[
{
"msg_contents": "Run the attached script and you'll get: \n\npsql -f error.sql -d test \npsql:error.sql:37: ERROR: attribute number 6 exceeds number of columns 5\n\nSplitting up the alter-table like this makes it work: \n\nalter table access\n add column start_timestamp timestamp not null DEFAULT CURRENT_TIMESTAMP,\n add column end_timestamp timestamp\n ;\n alter table access add column tsrange TSRANGE NOT NULL GENERATED ALWAYS AS \n(tsrange(start_timestamp, end_timestamp, '[)')) STORED\n ; \n\n-- \n\nAndreas Joseph Krogh",
"msg_date": "Tue, 26 Nov 2019 15:04:51 +0100 (CET)",
"msg_from": "Andreas Joseph Krogh <andreas@visena.com>",
"msg_from_op": true,
"msg_subject": "ERROR: attribute number 6 exceeds number of columns 5"
},
{
"msg_contents": "Andreas Joseph Krogh <andreas@visena.com> writes:\n> Run the attached script and you'll get: \n\n> psql -f error.sql -d test \n> psql:error.sql:37: ERROR: attribute number 6 exceeds number of columns 5\n\nHmm, interesting. IMO, that *should* have thrown an error, but of\ncourse not that one. The ADD COLUMN operations are all processed\nin parallel, so it's not okay for one of them to have a GENERATED\nexpression that refers to another one of the new columns. But you\nshould have gotten a \"no such column\" type of error, not a run-time\ncross-check failure.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 26 Nov 2019 10:49:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: attribute number 6 exceeds number of columns 5"
},
{
"msg_contents": "At Tue, 26 Nov 2019 10:49:11 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Andreas Joseph Krogh <andreas@visena.com> writes:\n> > Run the attached script and you'll get: \n> \n> > psql -f error.sql -d test \n> > psql:error.sql:37: ERROR: attribute number 6 exceeds number of columns 5\n> \n> Hmm, interesting. IMO, that *should* have thrown an error, but of\n> course not that one. The ADD COLUMN operations are all processed\n> in parallel, so it's not okay for one of them to have a GENERATED\n> expression that refers to another one of the new columns. But you\n> should have gotten a \"no such column\" type of error, not a run-time\n> cross-check failure.\n\nSomething like this works?\n\nALTER TABLE gtest25 ADD COLUMN x int, ADD COLUMN y int GENERATED ALWAYS AS (x * 4) STORED;\nERROR: column \"x\" does not exist\nDETAIL: An expression cannot reference columns added in the same command.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 27 Nov 2019 12:00:34 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: attribute number 6 exceeds number of columns 5"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Tue, 26 Nov 2019 10:49:11 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n>> Hmm, interesting. IMO, that *should* have thrown an error, but of\n>> course not that one. The ADD COLUMN operations are all processed\n>> in parallel, so it's not okay for one of them to have a GENERATED\n>> expression that refers to another one of the new columns. But you\n>> should have gotten a \"no such column\" type of error, not a run-time\n>> cross-check failure.\n\n> Something like this works?\n\nI started to look at this, but it felt a bit brute-force to me.\nAfter awhile I began to think that my offhand comment above was\nwrong --- why *shouldn't* this case work? When we insert or\nupdate a tuple, we expect that GENERATED columns should be\ncomputed based on the new tuple values, so why is the executor\nevidently evaluating them based on the old tuple?\n\nThat thought soon led me to realize that there's an adjacent\nbug that this patch fails to fix:\n\nregression=# create table foo (f1 int);\nCREATE TABLE\nregression=# insert into foo values(1),(2);\nINSERT 0 2\nregression=# alter table foo alter column f1 type float8, add column f2 int generated always as (f1 * 2) stored;\nERROR: attribute 1 of type foo has wrong type\nDETAIL: Table has type integer, but query expects double precision.\n\nSo I believe that the real problem here is that the executor is\nevaluating GENERATED expressions at the wrong time. It's evaluating\nthem against the pre-conversion tuples when it should be evaluating\nthem against the post-conversion tuples. We need to go fix that,\nrather than inserting arbitrary restrictions in the DDL code.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Jan 2020 11:53:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: attribute number 6 exceeds number of columns 5"
},
{
"msg_contents": "I wrote:\n> So I believe that the real problem here is that the executor is\n> evaluating GENERATED expressions at the wrong time. It's evaluating\n> them against the pre-conversion tuples when it should be evaluating\n> them against the post-conversion tuples. We need to go fix that,\n> rather than inserting arbitrary restrictions in the DDL code.\n\nI looked at that more closely, and realized that blaming the executor\nis wrong: the real issue is that ALTER TABLE itself supposes that it\nneed only evaluate expressions against the old tuple. That's easy\nto fix with a bit more code though. I propose the attached.\n\n(Note that this should also allow relaxing the existing implementation\nrestriction against changing types of columns that GENERATED columns\ndepend on: all we have to do is re-parse the generation expression\nand schedule it for evaluation. I've not looked into that, and it\ndoesn't seem like a bug fix anyway.)\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 07 Jan 2020 12:47:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: attribute number 6 exceeds number of columns 5"
}
]
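The bug and workaround in the thread above can be condensed into a short sketch (the original attached error.sql is not reproduced here, so the base table definition is illustrative; the added columns follow the workaround Andreas posted):

```sql
CREATE TABLE access (id integer PRIMARY KEY);

-- On the releases discussed, a GENERATED column could not cleanly reference
-- columns added in the same ALTER TABLE command: instead of a clean
-- "column does not exist" error, Andreas's script failed at run time with
-- "ERROR: attribute number 6 exceeds number of columns 5".
ALTER TABLE access
    ADD COLUMN start_timestamp timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
    ADD COLUMN end_timestamp timestamp,
    ADD COLUMN tsrange tsrange NOT NULL
        GENERATED ALWAYS AS (tsrange(start_timestamp, end_timestamp, '[)')) STORED;

-- Workaround: split the statement so the referenced columns already exist
-- when the generation expression is parsed.
ALTER TABLE access
    ADD COLUMN start_timestamp timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
    ADD COLUMN end_timestamp timestamp;
ALTER TABLE access
    ADD COLUMN tsrange tsrange NOT NULL
        GENERATED ALWAYS AS (tsrange(start_timestamp, end_timestamp, '[)')) STORED;
```

With Tom Lane's proposed fix (evaluating generation expressions against the post-conversion tuples), the single-command form is expected to work as well.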
[
{
"msg_contents": "Hi Hackers,\n\nSome ORMs may generate queries larger than the maximum possible value of \ntrack_activity_query_size (100 kB).\nIs there any reason to limit the maximum value of \ntrack_activity_query_size to such a small value?\nIncreasing the maximum value to 1 MB will help partially solve this \nproblem.\nThe patch also notes the maximum value of track_activity_query_size in \npostgresql.conf.sample (previously it was not specified).\n\n--\nVyacheslav Makarov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 26 Nov 2019 17:59:25 +0300",
"msg_from": "v.makarov@postgrespro.ru",
"msg_from_op": true,
"msg_subject": "[PATCH] Increase the maximum value track_activity_query_size"
},
{
"msg_contents": "\nGood question. I am in favor of allowing a larger value if no one\nobjects. I don't think adding the min/max is helpful.\n\n---------------------------------------------------------------------------\n\nOn Tue, Nov 26, 2019 at 05:59:25PM +0300, v.makarov@postgrespro.ru wrote:\n> Hi Hackers,\n> \n> Some ORMs may generate queries larger than the maximum possible value of\n> track_activity_query_size (100 kB).\n> Is there any reason to limit the maximum value of track_activity_query_size\n> to such a small value?\n> Increasing the maximum value to 1 MB will help partially solve this problem.\n> The patch also notes the maximum value of track_activity_query_size in\n> postgresql.conf.sample (previously it was not specified).\n> \n> --\n> Vyacheslav Makarov\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n\n> diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c\n> index ba4edde71a..0e64dc1dbb 100644\n> --- a/src/backend/utils/misc/guc.c\n> +++ b/src/backend/utils/misc/guc.c\n> @@ -3200,7 +3200,7 @@ static struct config_int ConfigureNamesInt[] =\n> \t\t\tGUC_UNIT_BYTE\n> \t\t},\n> \t\t&pgstat_track_activity_query_size,\n> -\t\t1024, 100, 102400,\n> +\t\t1024, 100, 1048576,\n> \t\tNULL, NULL, NULL\n> \t},\n> \n> diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample\n> index 46a06ffacd..55d3bfbfd0 100644\n> --- a/src/backend/utils/misc/postgresql.conf.sample\n> +++ b/src/backend/utils/misc/postgresql.conf.sample\n> @@ -569,7 +569,7 @@\n> #track_counts = on\n> #track_io_timing = off\n> #track_functions = none\t\t\t# none, pl, all\n> -#track_activity_query_size = 1024\t# (change requires restart)\n> +#track_activity_query_size = 1024\t# range 100B - 1MB (change requires restart)\n> #stats_temp_directory = 'pg_stat_tmp'\n> \n> \n\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 19 Dec 2019 10:41:10 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> Good question. I am in favor of allowing a larger value if no one\n> objects. I don't think adding the min/max is helpful.\n\nI think there are pretty obvious performance and memory-consumption\npenalties to very large track_activity_query_size values. Who exactly\nare we really helping if we let them set it to huge values?\n\n(wanders away wondering if we have suitable integer-overflow checks\nin relevant code paths...)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Dec 2019 10:59:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
{
"msg_contents": "On Thu, Dec 19, 2019 at 10:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > Good question. I am in favor of allowing a larger value if no one\n> > objects. I don't think adding the min/max is helpful.\n>\n> I think there are pretty obvious performance and memory-consumption\n> penalties to very large track_activity_query_size values. Who exactly\n> are we really helping if we let them set it to huge values?\n\nThe original poster.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 19 Dec 2019 12:52:39 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
{
"msg_contents": "On 19.12.2019 20:52, Robert Haas wrote:\n> On Thu, Dec 19, 2019 at 10:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Bruce Momjian <bruce@momjian.us> writes:\n>>> Good question. I am in favor of allowing a larger value if no one\n>>> objects. I don't think adding the min/max is helpful.\n>\n> The original poster.\n\n\nAnd probably anyone else, who debugs stuck queries of yet another crazy \nORM. Yes, one could use log_min_duration_statement, but having a \npossibility to directly get it from pg_stat_activity without eyeballing \nthe logs is nice. Also, IIRC log_min_duration_statement applies only to \ncompleted statements.\n\n>> I think there are pretty obvious performance and memory-consumption\n>> penalties to very large track_activity_query_size values. Who exactly\n>> are we really helping if we let them set it to huge values?\n>>\n>> (wanders away wondering if we have suitable integer-overflow checks\n>> in relevant code paths...)\n\n\nThe value of pgstat_track_activity_query_size is in bytes, so setting it \nto any value below INT_MAX seems to be safe from that perspective. \nHowever, being multiplied by NumBackendStatSlots its reasonable value \nshould be far below INT_MAX (~2 GB).\n\nHonestly, it does not look to me like something badly needed, but still. \nWe already have hundreds of GUCs and it is easy for a user to build a \nsub-optimal configuration, so does this overprotection make sense?\n\n\nRegards\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n\n",
"msg_date": "Fri, 20 Dec 2019 14:35:37 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
{
"msg_contents": "On Fri, Dec 20, 2019 at 02:35:37PM +0300, Alexey Kondratov wrote:\n> On 19.12.2019 20:52, Robert Haas wrote:\n> > On Thu, Dec 19, 2019 at 10:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Bruce Momjian <bruce@momjian.us> writes:\n> > > > Good question. I am in favor of allowing a larger value if no one\n> > > > objects. I don't think adding the min/max is helpful.\n> >\n> > The original poster.\n> \n> And probably anyone else, who debugs stuck queries of yet another crazy ORM.\n> Yes, one could use log_min_duration_statement, but having a possibility to\n> directly get it from pg_stat_activity without eyeballing the logs is nice.\n> Also, IIRC log_min_duration_statement applies only to completed statements.\n\nYes, you would need log_statement = true.\n\n> > > I think there are pretty obvious performance and memory-consumption\n> > > penalties to very large track_activity_query_size values. Who exactly\n> > > are we really helping if we let them set it to huge values?\n> > > \n> > > (wanders away wondering if we have suitable integer-overflow checks\n> > > in relevant code paths...)\n> \n> The value of pgstat_track_activity_query_size is in bytes, so setting it to\n> any value below INT_MAX seems to be safe from that perspective. However,\n> being multiplied by NumBackendStatSlots its reasonable value should be far\n> below INT_MAX (~2 GB).\n> \n> Honestly, it does not look to me like something badly needed, but still.\n> We already have hundreds of GUCs and it is easy for a user to build a\n> sub-optimal configuration, so does this overprotection make sense?\n\nI can imagine using larger pgstat_track_activity_query_size values for\ndata warehouse queries, where they are long and there are only a few of\nthem.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 20 Dec 2019 08:57:04 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
{
"msg_contents": "On Fri, Dec 20, 2019 at 08:57:04AM -0500, Bruce Momjian wrote:\n> I can imagine using larger pgstat_track_activity_query_size values for\n> data warehouse queries, where they are long and there are only a few of\n> them.\n\nWhy are those queries that long anyway? A too long IN clause with an\ninsane amount of parameters which could be replaced by an ANY clause\nwith an array?\n--\nMichael",
"msg_date": "Sat, 21 Dec 2019 11:47:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
{
"msg_contents": "Here is what ORMs do:\n\nselect length('SELECT \"column_name_1001\", \"column_name_1002\",\n\"column_name_1003\", \"column_name_1004\", \"column_name_1005\",\n\"column_name_1006\", \"column_name_1007\", \"column_name_1008\",\n\"column_name_1009\", \"column_name_1010\", \"column_name_1011\",\n\"column_name_1012\", \"column_name_1013\", \"column_name_1014\",\n\"column_name_1015\", \"column_name_1016\", \"column_name_1017\",\n\"column_name_1018\", \"column_name_1019\", \"column_name_1020\",\n\"column_name_1021\", \"column_name_1022\", \"column_name_1023\",\n\"column_name_1024\", \"column_name_1025\", \"column_name_1026\",\n\"column_name_1027\", \"column_name_1028\", \"column_name_1029\",\n\"column_name_1030\", \"column_name_1031\", \"column_name_1032\",\n\"column_name_1033\", \"column_name_1034\", \"column_name_1035\",\n\"column_name_1036\", \"column_name_1037\", \"column_name_1038\",\n\"column_name_1039\", \"column_name_1040\", \"column_name_1041\",\n\"column_name_1042\", \"column_name_1043\", \"column_name_1044\",\n\"column_name_1045\", \"column_name_1046\", \"column_name_1047\",\n\"column_name_1048\", \"column_name_1049\", \"column_name_1050\" FROM\n\"some_table\";');\n length\n--------\n 1024\n(1 row)\n\nThat's it – with default settings, you won't see WHERE clause or\nanything else.\n\nIt is not only about analytical workloads. I see it in regular OLTP\nworkloads in literally *any* large project that uses an ORM. Ruby on Rails'\nActiveRecord does it, Java's Hibernate does, and so on.\n\nAs a result, many queries exceed track_activity_query_size, and we\nend up having queries trimmed in pg_stat_activity. Why it is bad:\n- it makes an automated analysis involving pg_stat_activity impossible,\n- it complicates any manual troubleshooting involving pg_stat_activity.\n\nChanging this parameter in a mission-critical database is difficult\nbecause it requires a restart.\n\n+1 for changing it to 1M or at least to 100k. If the penalty is significant,\nat least 10k.\n\nWhat is the overhead here except the memory consumption?\nConsumption of, say, 100 * 1MiB = 100MiB of RAM is a low price for much\nbetter transparency here. But what about the performance penalty?\nSome benchmark would be nice to answer this.\n\nOn Fri, Dec 20, 2019 at 6:48 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Dec 20, 2019 at 08:57:04AM -0500, Bruce Momjian wrote:\n> > I can imagine using larger pgstat_track_activity_query_size values for\n> > data warehouse queries, where they are long and there are only a few of\n> > them.\n>\n> Why are those queries that long anyway? A too long IN clause with an\n> insane amount of parameters which could be replaced by an ANY clause\n> with an array?\n> --\n> Michael\n>",
"msg_date": "Sat, 21 Dec 2019 12:45:05 -0800",
"msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
{
"msg_contents": "Nikolay Samokhvalov <samokhvalov@gmail.com> writes:\n> Here is what ORMs do:\n> select length('SELECT \"column_name_1001\", \"column_name_1002\",\n> \"column_name_1003\", \"column_name_1004\", \"column_name_1005\",\n> \"column_name_1006\", \"column_name_1007\", \"column_name_1008\",\n> \"column_name_1009\", \"column_name_1010\", \"column_name_1011\",\n> \"column_name_1012\", \"column_name_1013\", \"column_name_1014\",\n> \"column_name_1015\", \"column_name_1016\", \"column_name_1017\",\n> \"column_name_1018\", \"column_name_1019\", \"column_name_1020\",\n> \"column_name_1021\", \"column_name_1022\", \"column_name_1023\",\n> \"column_name_1024\", \"column_name_1025\", \"column_name_1026\",\n> \"column_name_1027\", \"column_name_1028\", \"column_name_1029\",\n> \"column_name_1030\", \"column_name_1031\", \"column_name_1032\",\n> \"column_name_1033\", \"column_name_1034\", \"column_name_1035\",\n> \"column_name_1036\", \"column_name_1037\", \"column_name_1038\",\n> \"column_name_1039\", \"column_name_1040\", \"column_name_1041\",\n> \"column_name_1042\", \"column_name_1043\", \"column_name_1044\",\n> \"column_name_1045\", \"column_name_1046\", \"column_name_1047\",\n> \"column_name_1048\", \"column_name_1049\", \"column_name_1050\" FROM\n> \"some_table\";');\n> length\n> --------\n> 1024\n> (1 row)\n\n> That's it – with default settings, you won't see WHERE clause or\n> anything else.\n\nIf that's true, it doesn't offer much of a case for upping the limit\non track_activity_query_size. The longest such a query could reasonably\nget is somewhere near NAMEDATALEN times MaxHeapAttributeNumber, which\nas it happens is exactly the existing limit on track_activity_query_size.\n\n> As a result, many queries exceed track_activity_query_size\n\nHow? And if they are, why do you care? Such queries sure seem\npretty content-free.\n\n> What is the overhead here except the memory consumption?\n\nThe time to copy those strings out of shared storage, any time\nyou query pg_stat_activity.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 21 Dec 2019 16:25:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
{
"msg_contents": "On Sat, Dec 21, 2019 at 04:25:05PM -0500, Tom Lane wrote:\n>Nikolay Samokhvalov <samokhvalov@gmail.com> writes:\n>> Here is what ORMs do:\n>> select length('SELECT \"column_name_1001\", \"column_name_1002\",\n>> \"column_name_1003\", \"column_name_1004\", \"column_name_1005\",\n>> \"column_name_1006\", \"column_name_1007\", \"column_name_1008\",\n>> \"column_name_1009\", \"column_name_1010\", \"column_name_1011\",\n>> \"column_name_1012\", \"column_name_1013\", \"column_name_1014\",\n>> \"column_name_1015\", \"column_name_1016\", \"column_name_1017\",\n>> \"column_name_1018\", \"column_name_1019\", \"column_name_1020\",\n>> \"column_name_1021\", \"column_name_1022\", \"column_name_1023\",\n>> \"column_name_1024\", \"column_name_1025\", \"column_name_1026\",\n>> \"column_name_1027\", \"column_name_1028\", \"column_name_1029\",\n>> \"column_name_1030\", \"column_name_1031\", \"column_name_1032\",\n>> \"column_name_1033\", \"column_name_1034\", \"column_name_1035\",\n>> \"column_name_1036\", \"column_name_1037\", \"column_name_1038\",\n>> \"column_name_1039\", \"column_name_1040\", \"column_name_1041\",\n>> \"column_name_1042\", \"column_name_1043\", \"column_name_1044\",\n>> \"column_name_1045\", \"column_name_1046\", \"column_name_1047\",\n>> \"column_name_1048\", \"column_name_1049\", \"column_name_1050\" FROM\n>> \"some_table\";');\n>> length\n>> --------\n>> 1024\n>> (1 row)\n>\n>> That's it – with default settings, you won't see WHERE clause or\n>> anything else.\n>\n>If that's true, it doesn't offer much of a case for upping the limit\n>on track_activity_query_size. The longest such a query could reasonably\n>get is somewhere near NAMEDATALEN times MaxHeapAttributeNumber, which\n>as it happens is exactly the existing limit on track_activity_query_size.\n>\n>> As a result, many queries exceed track_activity_query_size\n>\n>How? And if they are, why do you care? Such queries sure seem\n>pretty content-free.\n>\n\nI believe the example was just a very simplistic example. ORMs can of\ncourse generate queries with joins, which can easily exceed the limit\nyou mentioned.\n\n>> What is the overhead here except the memory consumption?\n>\n>The time to copy those strings out of shared storage, any time\n>you query pg_stat_activity.\n>\n\nIMO that seems like a reasonable price to pay, if you want to see\ncomplete queries and bump the track_activity_query_size value up.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 22 Dec 2019 01:03:08 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
{
"msg_contents": "On Sun, Dec 22, 2019 at 1:03 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Sat, Dec 21, 2019 at 04:25:05PM -0500, Tom Lane wrote:\n> >Nikolay Samokhvalov <samokhvalov@gmail.com> writes:\n> >> Here is what ORMs do:\n> >> select length('SELECT \"column_name_1001\", \"column_name_1002\",\n> >> \"column_name_1003\", \"column_name_1004\", \"column_name_1005\",\n> >> \"column_name_1006\", \"column_name_1007\", \"column_name_1008\",\n> >> \"column_name_1009\", \"column_name_1010\", \"column_name_1011\",\n> >> \"column_name_1012\", \"column_name_1013\", \"column_name_1014\",\n> >> \"column_name_1015\", \"column_name_1016\", \"column_name_1017\",\n> >> \"column_name_1018\", \"column_name_1019\", \"column_name_1020\",\n> >> \"column_name_1021\", \"column_name_1022\", \"column_name_1023\",\n> >> \"column_name_1024\", \"column_name_1025\", \"column_name_1026\",\n> >> \"column_name_1027\", \"column_name_1028\", \"column_name_1029\",\n> >> \"column_name_1030\", \"column_name_1031\", \"column_name_1032\",\n> >> \"column_name_1033\", \"column_name_1034\", \"column_name_1035\",\n> >> \"column_name_1036\", \"column_name_1037\", \"column_name_1038\",\n> >> \"column_name_1039\", \"column_name_1040\", \"column_name_1041\",\n> >> \"column_name_1042\", \"column_name_1043\", \"column_name_1044\",\n> >> \"column_name_1045\", \"column_name_1046\", \"column_name_1047\",\n> >> \"column_name_1048\", \"column_name_1049\", \"column_name_1050\" FROM\n> >> \"some_table\";');\n> >> length\n> >> --------\n> >> 1024\n> >> (1 row)\n> >\n> >> That's it – with default settings, you won't see WHERE clause or\n> >> anything else.\n> >\n> >If that's true, it doesn't offer much of a case for upping the limit\n> >on track_activity_query_size. The longest such a query could reasonably\n> >get is somewhere near NAMEDATALEN times MaxHeapAttributeNumber, which\n> >as it happens is exactly the existing limit on track_activity_query_size.\n> >\n> >> As a result, many queries exceed track_activity_query_size\n> >\n> >How? And if they are, why do you care? Such queries sure seem\n> >pretty content-free.\n> >\n>\n> I believe the example was just a very simplistic example. ORMs can of\n> course generate queries with joins, which can easily exceed the limit\n> you mentioned.\n>\n> >> What is the overhead here except the memory consumption?\n> >\n> >The time to copy those strings out of shared storage, any time\n> >you query pg_stat_activity.\n> >\n>\n> IMO that seems like a reasonable price to pay, if you want to see\n> complete queries and bump the track_activity_query_size value up.\n\nCouldn't be pg_stat_statements (or any similar extension) queryid\nexposure in pg_stat_activity [1] also an alternative? You wouldn't\nhave the parameters but maybe the normalized query would be enough for\nmost analysis. Now, maybe pg_stat_statements jumble overhead for such\nlarge statements would be even more problematic.\n\n[1] https://commitfest.postgresql.org/26/2069/\n\n\n",
"msg_date": "Sun, 22 Dec 2019 09:06:41 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
{
"msg_contents": "On Sun, Dec 22, 2019 at 09:06:41AM +0100, Julien Rouhaud wrote:\n>On Sun, Dec 22, 2019 at 1:03 AM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> On Sat, Dec 21, 2019 at 04:25:05PM -0500, Tom Lane wrote:\n>> >Nikolay Samokhvalov <samokhvalov@gmail.com> writes:\n>> >> Here is what ORMs do:\n>> >> select length('SELECT \"column_name_1001\", \"column_name_1002\",\n>> >> \"column_name_1003\", \"column_name_1004\", \"column_name_1005\",\n>> >> \"column_name_1006\", \"column_name_1007\", \"column_name_1008\",\n>> >> \"column_name_1009\", \"column_name_1010\", \"column_name_1011\",\n>> >> \"column_name_1012\", \"column_name_1013\", \"column_name_1014\",\n>> >> \"column_name_1015\", \"column_name_1016\", \"column_name_1017\",\n>> >> \"column_name_1018\", \"column_name_1019\", \"column_name_1020\",\n>> >> \"column_name_1021\", \"column_name_1022\", \"column_name_1023\",\n>> >> \"column_name_1024\", \"column_name_1025\", \"column_name_1026\",\n>> >> \"column_name_1027\", \"column_name_1028\", \"column_name_1029\",\n>> >> \"column_name_1030\", \"column_name_1031\", \"column_name_1032\",\n>> >> \"column_name_1033\", \"column_name_1034\", \"column_name_1035\",\n>> >> \"column_name_1036\", \"column_name_1037\", \"column_name_1038\",\n>> >> \"column_name_1039\", \"column_name_1040\", \"column_name_1041\",\n>> >> \"column_name_1042\", \"column_name_1043\", \"column_name_1044\",\n>> >> \"column_name_1045\", \"column_name_1046\", \"column_name_1047\",\n>> >> \"column_name_1048\", \"column_name_1049\", \"column_name_1050\" FROM\n>> >> \"some_table\";');\n>> >> length\n>> >> --------\n>> >> 1024\n>> >> (1 row)\n>> >\n>> >> That's it – with default settings, you won't see WHERE clause or\n>> >> anything else.\n>> >\n>> >If that's true, it doesn't offer much of a case for upping the limit\n>> >on track_activity_query_size. The longest such a query could reasonably\n>> >get is somewhere near NAMEDATALEN times MaxHeapAttributeNumber, which\n>> >as it happens is exactly the existing limit on track_activity_query_size.\n>> >\n>> >> As a result, many queries exceed track_activity_query_size\n>> >\n>> >How? And if they are, why do you care? Such queries sure seem\n>> >pretty content-free.\n>> >\n>>\n>> I believe the example was just a very simplistic example. ORMs can of\n>> course generate queries with joins, which can easily exceed the limit\n>> you mentioned.\n>>\n>> >> What is the overhead here except the memory consumption?\n>> >\n>> >The time to copy those strings out of shared storage, any time\n>> >you query pg_stat_activity.\n>> >\n>>\n>> IMO that seems like a reasonable price to pay, if you want to see\n>> complete queries and bump the track_activity_query_size value up.\n>\n>Couldn't be pg_stat_statements (or any similar extension) queryid\n>exposure in pg_stat_activity [1] also an alternative? You wouldn't\n>have the parameters but maybe the normalized query would be enough for\n>most analysis. Now, maybe pg_stat_statements jumble overhead for such\n>large statements would be even more problematic.\n>\n\nBut that would effectively add dependency on pg_stat_statements, no? I\ndon't think we want that.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 23 Dec 2019 13:10:54 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
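The back-of-the-envelope bound Tom mentions can be checked with quick arithmetic. The sketch below assumes the stock header values NAMEDATALEN = 64 and MaxHeapAttributeNumber = 1600, which are not stated explicitly in the thread; their product does come out to exactly the pre-patch track_activity_query_size maximum of 100kB:

```python
# Rough upper bound on an ORM-style "SELECT all columns" query, per Tom's
# reasoning: each quoted column name is at most NAMEDATALEN bytes, and a
# table can have at most MaxHeapAttributeNumber columns.
NAMEDATALEN = 64                   # assumed stock value (pg_config_manual.h)
MAX_HEAP_ATTRIBUTE_NUMBER = 1600   # assumed stock value (htup_details.h)

bound = NAMEDATALEN * MAX_HEAP_ATTRIBUTE_NUMBER
print(bound)  # 102400 bytes, i.e. the existing 100kB limit Tom refers to
```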
{
"msg_contents": "On Mon, Dec 23, 2019 at 1:10 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Sun, Dec 22, 2019 at 09:06:41AM +0100, Julien Rouhaud wrote:\n> >On Sun, Dec 22, 2019 at 1:03 AM Tomas Vondra\n> ><tomas.vondra@2ndquadrant.com> wrote:\n> >>\n> >> On Sat, Dec 21, 2019 at 04:25:05PM -0500, Tom Lane wrote:\n> >> >> What is the overhead here except the memory consumption?\n> >> >\n> >> >The time to copy those strings out of shared storage, any time\n> >> >you query pg_stat_activity.\n> >> >\n> >>\n> >> IMO that seems like a reasonable price to pay, if you want to see\n> >> complete queries and bump the track_activity_query_size value up.\n> >\n> >Couldn't be pg_stat_statements (or any similar extension) queryid\n> >exposure in pg_stat_activity [1] also an alternative? You wouldn't\n> >have the parameters but maybe the normalized query would be enough for\n> >most analysis. Now, maybe pg_stat_statements jumble overhead for such\n> >large statements would be even more problematic.\n> >\n>\n> But that would effectively add dependency on pg_stat_statements, no? I\n> don't think we want that.\n\nThe queryid field is part of the core, so no dependency is added. You\njust get a somewhat useless NULL value returned until you load an\nextension that compute a queryid, which may be pg_stat_statements but\nany other one will work too.\n\n\n",
"msg_date": "Mon, 23 Dec 2019 13:33:31 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
{
"msg_contents": "On Sat, Dec 21, 2019 at 1:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > What is the overhead here except the memory consumption?\n>\n> The time to copy those strings out of shared storage, any time\n> you query pg_stat_activity.\n\nIt seems like you're masterminding this, and I don't know why. It\nseems unlikely that anyone will raise the value unless they have very\nlong queries, and if those people would rather pay the overhead of\ncopying more data than have their queries truncated, who are we to\nargue?\n\nIf increasing the maximum imposed some noticeable cost on\ninstallations that kept the default setting, that might well be a good\nargument for not raising the maximum. But I don't think that's the\ncase. I also suspect that the overhead would be pretty darn small even\nfor people who *do* raise the default setting. It looks to me like\nboth read and write operations on st_activity_raw stop when they\nhit a NUL byte, so any performance costs on short queries must come\nfrom second-order effects (e.g. the main shared memory segment is\nbigger, so the OS cache is smaller) which are likely irrelevant in\npractice.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 23 Dec 2019 21:10:43 -0800",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
{
"msg_contents": "On Tue, Dec 24, 2019 at 12:11 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Sat, Dec 21, 2019 at 1:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > What is the overhead here except the memory consumption?\n> >\n> > The time to copy those strings out of shared storage, any time\n> > you query pg_stat_activity.\n>\n> It seems like you're masterminding this, and I don't know why. It\n> seems unlikely that anyone will raise the value unless they have very\n> long queries, and if those people would rather pay the overhead of\n> copying more data than have their queries truncated, who are we to\n> argue?\n>\n\n+1\n\nCheers,\n\nJeff\n\nOn Tue, Dec 24, 2019 at 12:11 AM Robert Haas <robertmhaas@gmail.com> wrote:On Sat, Dec 21, 2019 at 1:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > What is the overhead here except the memory consumption?\n>\n> The time to copy those strings out of shared storage, any time\n> you query pg_stat_activity.\n\nIt seems like you're masterminding this, and I don't know why. It\nseems unlikely that anyone will raise the value unless they have very\nlong queries, and if those people would rather pay the overhead of\ncopying more data than have their queries truncated, who are we to\nargue?+1Cheers,Jeff",
"msg_date": "Sun, 29 Dec 2019 12:46:50 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
{
"msg_contents": "On Mon, Dec 23, 2019 at 9:11 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sat, Dec 21, 2019 at 1:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > What is the overhead here except the memory consumption?\n> >\n> > The time to copy those strings out of shared storage, any time\n> > you query pg_stat_activity.\n>\n> It seems like you're masterminding this, and I don't know why. It\n> seems unlikely that anyone will raise the value unless they have very\n> long queries, and if those people would rather pay the overhead of\n> copying more data than have their queries truncated, who are we to\n> argue?\n>\n> If increasing the maximum imposed some noticeable cost on\n> installations the kept the default setting, that might well be a good\n> argument for not raising the maximum. But I don't think that's the\n> case. I also suspect that the overhead would be pretty darn small even\n> for people who *do* raise the default setting. It looks to me like\n> both reading and write operations on st_activity_raw stop when they\n> hit a NUL byte, so any performance costs on short queries must come\n> from second-order effects (e.g. the main shared memory segment is\n> bigger, so the OS cache is smaller) which are likely irrelevant in\n> practice.\n>\n\nI'm generally in favor of the idea of allowing people to make\ntrade-offs that best work for them, but Tom's concern does give me\npause, because it isn't clear to me how people will measure the\noverhead of upping this setting. If given the option people will\nalmost certainly start raising this limit because the benefits are\nobvious (\"I can see all my query now!\") but so far the explanation of\nthe downsides has been either hand-wavy or, in the case of your\nsecond paragraph, an argument they are non-existent, which doesn't\nseem right either; so how do we explain to people how to measure the\noverhead for them?\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Mon, 30 Dec 2019 12:46:40 -0800",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
{
"msg_contents": "On Mon, Dec 30, 2019 at 12:46:40PM -0800, Robert Treat wrote:\n>On Mon, Dec 23, 2019 at 9:11 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>>\n>> On Sat, Dec 21, 2019 at 1:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> > > What is the overhead here except the memory consumption?\n>> >\n>> > The time to copy those strings out of shared storage, any time\n>> > you query pg_stat_activity.\n>>\n>> It seems like you're masterminding this, and I don't know why. It\n>> seems unlikely that anyone will raise the value unless they have very\n>> long queries, and if those people would rather pay the overhead of\n>> copying more data than have their queries truncated, who are we to\n>> argue?\n>>\n>> If increasing the maximum imposed some noticeable cost on\n>> installations the kept the default setting, that might well be a good\n>> argument for not raising the maximum. But I don't think that's the\n>> case. I also suspect that the overhead would be pretty darn small even\n>> for people who *do* raise the default setting. It looks to me like\n>> both reading and write operations on st_activity_raw stop when they\n>> hit a NUL byte, so any performance costs on short queries must come\n>> from second-order effects (e.g. the main shared memory segment is\n>> bigger, so the OS cache is smaller) which are likely irrelevant in\n>> practice.\n>>\n>\n>I'm generally in favor of the idea of allowing people to make\n>trade-offs that best work for them, but Tom's concern does give me\n>pause, because it isn't clear to me how people will measure the\n>overhead of upping this setting. 
If given the option people will\n>almost certainly start raising this limit because the benefits are\n>obvious (\"I can see all my query now!\") but so far the explanation of\n>the downsides have been either hand-wavy or, in the case of your\n>second paragraph, an argument they are non-existent, which doesn't\n>seem right either; so how do we explain to people how to measure the\n>overhead for them?\n>\n\nI think there are two questions that we need to answer:\n\n1) Does allowing higher values for the GUC mean overhead for people who\ndon't actually increase it?\n\nI don't think so.\n\n2) What's the overhead for increasing the value for short/long queries?\n\nMy assumption is that for short queries, it's going to be negligible.\nFor longer queries it may be measurable, but I'd expect longer queries\nto be more expensive in general, so maybe it's still negligible.\n\nOf course, the easiest thing we can do is actually measuring this.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 30 Dec 2019 22:33:42 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> 2) What's the overhead for increasing the value for short/long queries?\n\n> My assumption is that for short queries, it's going to be negligible.\n> For longer queries it may be measurable, but I'd expect longer queries\n> to be more expensive in general, so maybe it's still negligible.\n\nThe thing that has been bothering me is the idea that backends reading\nst_activity_raw might palloc the max possible length and/or memcpy the\nwhole buffer rather than just the valid part. Having now rooted through\npgstat.c, that appears to be half true: the local allocation made by\npgstat_read_current_status() will be just as large as the shared-memory\narena, but we use strcpy() or equivalent so that each query copy should\nstop upon hitting a '\\0'. So the run-time cost should be negligible, but\nyou might be eating a lot of memory if multiple sessions are inspecting\npg_stat_activity and you cranked the setting up imprudently high.\n\nThis doesn't seem like a reason not to allow a higher limit, like a\nmegabyte or so, but I'm not sure that pushing it to the moon would be\nwise.\n\nMeanwhile, I noted what seems like a pretty obvious bug in\npg_stat_get_backend_activity():\n\n\tclipped_activity = pgstat_clip_activity(activity);\n\tret = cstring_to_text(activity);\n\tpfree(clipped_activity);\n\nWe're not actually applying the intended clip to the returned\nvalue, so that an invalidly-encoded result is possible.\n\n(Of course, since we also don't seem to be making any attempt\nto translate from the source backend's encoding to our own,\nthere's more problems here than just that.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 30 Dec 2019 18:08:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
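The memory figure Tom describes can be put in concrete numbers: each session that inspects pg_stat_activity makes a local copy sized like the whole shared arena, even though the per-query copy stops at the first NUL. The slot count and setting value below are illustrative assumptions, not taken from the thread:

```python
def activity_copy_bytes(num_backend_slots: int, track_activity_query_size: int) -> int:
    """Size of the local allocation each reader of pg_stat_activity makes,
    mirroring Tom's description of pgstat_read_current_status():
    one full-size query buffer per backend slot, regardless of use."""
    return num_backend_slots * track_activity_query_size

# Illustrative numbers: 100 backend slots, setting cranked up to 1 MB.
per_reader = activity_copy_bytes(100, 1024 * 1024)
print(per_reader // (1024 * 1024), "MB allocated per inspecting session")  # 100 MB
```

This is the "eating a lot of memory if multiple sessions are inspecting pg_stat_activity" cost: it scales with the number of concurrent readers, not with actual query lengths.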
{
"msg_contents": "On Tue, Dec 31, 2019 at 9:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> This doesn't seem like a reason not to allow a higher limit, like a\n> megabyte or so, but I'm not sure that pushing it to the moon would be\n> wise.\n>\n\n\nJust to get a mental handle on the size of queries we might be\nallowing before truncation, I did some very rough arithmetic on what\nwell known texts might fit in a megabyte. By my calculations you could\nfit about four Animal Farms or one Madame Bovary in about a megabyte.\nSo I think that seems like more than enough :-). (My mind kinda\nexplodes at the thought of debugging a query as long as Animal Farm.)\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 31 Dec 2019 10:16:15 +1030",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
{
"msg_contents": "On Tue, Dec 31, 2019 at 10:16 AM Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n>\n> On Tue, Dec 31, 2019 at 9:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> >\n> > This doesn't seem like a reason not to allow a higher limit, like a\n> > megabyte or so, but I'm not sure that pushing it to the moon would be\n> > wise.\n> >\n>\n>\n> Just to get a mental handle on the size of queries we might be\n> allowing before truncation, I did some very rough arithmetic on what\n> well known texts might fit in a megabyte. By my calculations you could\n> fit about four Animal Farms or one Madame Bovary in about a megabyte.\n> So I think that seems like more than enough :-). (My mind kinda\n> explores at the thought of debugging a query as long as Animal Farm.)\n>\n\n\nTurns out my arithmetic was a bit off. Animal Farm is 90 kb, Madame\nBovary 678 Kb.\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 31 Dec 2019 10:25:26 +1030",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
{
"msg_contents": "On 2019-Dec-31, Andrew Dunstan wrote:\n\n> Turns out my arithmetic was a bit off. Animal Farm is 90 kb, Madame\n> Bovary 678 Kb.\n\nTen animal farms should be enough for everybody.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 30 Dec 2019 21:10:53 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
{
"msg_contents": "On Tue, Dec 31, 2019 at 10:25:26AM +1030, Andrew Dunstan wrote:\n>On Tue, Dec 31, 2019 at 10:16 AM Andrew Dunstan\n><andrew.dunstan@2ndquadrant.com> wrote:\n>>\n>> On Tue, Dec 31, 2019 at 9:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> >\n>> > This doesn't seem like a reason not to allow a higher limit, like a\n>> > megabyte or so, but I'm not sure that pushing it to the moon would be\n>> > wise.\n>> >\n>>\n>>\n>> Just to get a mental handle on the size of queries we might be\n>> allowing before truncation, I did some very rough arithmetic on what\n>> well known texts might fit in a megabyte. By my calculations you could\n>> fit about four Animal Farms or one Madame Bovary in about a megabyte.\n>> So I think that seems like more than enough :-). (My mind kinda\n>> explores at the thought of debugging a query as long as Animal Farm.)\n>>\n>\n>\n>Turns out my arithmetic was a bit off. Animal Farm is 90 kb, Madame\n>Bovary 678 Kb.\n>\n\nNot sure, but the Animal Farm text I found is about ~450kB (~120 pages,\nwith ~3kB per page) ...\n\nAnyway, the longest queries I personally saw in production were a couple\nof kB long (~32kB IIRC, it's been a couple years ago). The queries were\ngenerated by the application (not quite a traditional ORM, but something\nlike it), with long identifiers (e.g. table names) pretty long due to\nincluding a hash (so being 63 characters most of the time). Plus the\ncolumns were always fully qualified, with multiple joins etc.\n\nNot sure what a good limit would be. Obviously, if we pick value X, the\nnext day someone will come asking for X+1 ... ;-)\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 31 Dec 2019 01:46:42 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
{
"msg_contents": "On Tue, Dec 31, 2019 at 11:16 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Tue, Dec 31, 2019 at 10:25:26AM +1030, Andrew Dunstan wrote:\n> >On Tue, Dec 31, 2019 at 10:16 AM Andrew Dunstan\n> ><andrew.dunstan@2ndquadrant.com> wrote:\n> >>\n> >> On Tue, Dec 31, 2019 at 9:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>\n> >> >\n> >> > This doesn't seem like a reason not to allow a higher limit, like a\n> >> > megabyte or so, but I'm not sure that pushing it to the moon would be\n> >> > wise.\n> >> >\n> >>\n> >>\n> >> Just to get a mental handle on the size of queries we might be\n> >> allowing before truncation, I did some very rough arithmetic on what\n> >> well known texts might fit in a megabyte. By my calculations you could\n> >> fit about four Animal Farms or one Madame Bovary in about a megabyte.\n> >> So I think that seems like more than enough :-). (My mind kinda\n> >> explores at the thought of debugging a query as long as Animal Farm.)\n> >>\n> >\n> >\n> >Turns out my arithmetic was a bit off. Animal Farm is 90 kb, Madame\n> >Bovary 678 Kb.\n> >\n>\n> Not sure, but the Animal Farm text I found is about ~450kB (~120 pages,\n> with ~3kB per page) ...\n\n\nMy browser has led me astray.\nhttp://gutenberg.net.au/ebooks01/0100011.txt is in fact 172618 bytes.\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 31 Dec 2019 12:17:08 +1030",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
{
"msg_contents": "On Mon, Dec 30, 2019 at 6:46 PM Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> wrote:\n\n> On Tue, Dec 31, 2019 at 9:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> >\n> > This doesn't seem like a reason not to allow a higher limit, like a\n> > megabyte or so, but I'm not sure that pushing it to the moon would be\n> > wise.\n> >\n>\n>\n> Just to get a mental handle on the size of queries we might be\n> allowing before truncation, I did some very rough arithmetic on what\n> well known texts might fit in a megabyte. By my calculations you could\n> fit about four Animal Farms or one Madame Bovary in about a megabyte.\n> So I think that seems like more than enough :-). (My mind kinda\n> explores at the thought of debugging a query as long as Animal Farm.)\n>\n>\nI've seen some pretty big IN-lists and VALUES lists. They aren't so hard\nto debug once you tune out iterations 3 through N-3 of the list members.\nUnless they are hard to debug for other reasons. In these cases, it would\nbe helpful, if not just allowing bigger texts in general, to instead\n\"truncate\" from the middle, preserving both the beginning and the end of\nthe query text.\n\nCheers,\n\nJeff\n\nOn Mon, Dec 30, 2019 at 6:46 PM Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:On Tue, Dec 31, 2019 at 9:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> This doesn't seem like a reason not to allow a higher limit, like a\n> megabyte or so, but I'm not sure that pushing it to the moon would be\n> wise.\n>\n\n\nJust to get a mental handle on the size of queries we might be\nallowing before truncation, I did some very rough arithmetic on what\nwell known texts might fit in a megabyte. By my calculations you could\nfit about four Animal Farms or one Madame Bovary in about a megabyte.\nSo I think that seems like more than enough :-). (My mind kinda\nexplores at the thought of debugging a query as long as Animal Farm.)\nI've seen some pretty big IN-lists and VALUES lists. 
They aren't so hard to debug once you tune out iterations 3 through N-3 of the list members. Unless they are hard to debug for other reasons. In these cases, it would be helpful, if not just allowing bigger texts in general, to instead \"truncate\" from the middle, preserving both the beginning and the end of the query text.Cheers,Jeff",
"msg_date": "Thu, 2 Jan 2020 15:27:20 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
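Jeff's "truncate from the middle" idea can be sketched in a few lines. PostgreSQL itself truncates at the end, so this is purely an illustration of the proposal, not existing behavior:

```python
def truncate_middle(query: str, limit: int, marker: str = " ... ") -> str:
    """Fit a query text into `limit` characters by keeping its beginning
    and end and replacing the middle with a marker, instead of cutting
    off the tail as plain truncation does."""
    if len(query) <= limit:
        return query
    keep = max(limit - len(marker), 2)  # characters of original text to retain
    head = (keep + 1) // 2              # head gets the extra char when odd
    tail = keep - head                  # always >= 1 because keep >= 2
    return query[:head] + marker + query[-tail:]

# A big IN-list query: both the SELECT list and the closing of the
# IN-list survive, while the repetitive middle is elided.
q = "SELECT * FROM t WHERE id IN (" + ", ".join(str(i) for i in range(1000)) + ")"
print(truncate_middle(q, 60))
```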
{
"msg_contents": "On Thu, Jan 2, 2020 at 3:27 PM Jeff Janes <jeff.janes@gmail.com> wrote:\n> I've seen some pretty big IN-lists and VALUES lists. They aren't so hard to debug once you tune out iterations 3 through N-3 of the list members. Unless they are hard to debug for other reasons. In these cases, it would be helpful, if not just allowing bigger texts in general, to instead \"truncate\" from the middle, preserving both the beginning and the end of the query text.\n\nI vote for not trying to make this more complicated and just accepting\nthe original proposal. It's about a factor of ten increase over the\nlimit we have right now, which doesn't seem like enough to cause any\nreal breakage, and it should be enough to satisfy the majority of the\npeople who are unhappy with the current limit, and it is very little\nwork. If somebody wants to do more work on this later, they can, but I\ndon't think the OP should be on the hook for that.\n\nAt some point, someone (I think Peter Geoghegan) suggested that\npg_stat_statements ought to normalize IN lists down to a single\nelement. That kind of thing might be another approach to the problem\nyou mention. It's a bit easier to imagine doing such a thing from a\ntool like that than it is to do it for strings in pg_stat_activity\nbecause pg_stat_statements has got a parse tree to work with, not just\na flat string. And that might work more nicely than just keeping the\nbeginning and end of the string, but of course it's also more\ncomplicated, so I don't know.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 Jan 2020 13:36:01 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
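As Robert notes, pg_stat_statements could do IN-list normalization on a parse tree; a string-level regex can only approximate it. The sketch below is a deliberately naive illustration of collapsing an IN list to a placeholder (it mishandles nested parentheses and string literals containing parentheses, exactly the kind of case a parse tree handles cleanly):

```python
import re

def normalize_in_lists(sql: str) -> str:
    """Collapse the contents of simple IN (...) lists to one placeholder.
    Naive string-based sketch -- not what pg_stat_statements actually does."""
    return re.sub(r"(?i)\bIN\s*\([^()]*\)", "IN (...)", sql)

print(normalize_in_lists("SELECT 1 FROM t WHERE a IN (1, 2, 3) AND b IN ('x','y')"))
# SELECT 1 FROM t WHERE a IN (...) AND b IN (...)
```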
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I vote for not trying to make this more complicated and just accepting\n> the original proposal. It's about a factor of ten increase over the\n> limit we have right now, which doesn't seem like enough to cause any\n> real breakage, and it should be enough to satisfy the majority of the\n> people who are unhappy with the current limit, and it is very little\n> work.\n\n+1 ... we've surely beaten this topic to death by now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Jan 2020 13:48:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
{
"msg_contents": "On Fri, Jan 03, 2020 at 01:48:56PM -0500, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> I vote for not trying to make this more complicated and just accepting\n>> the original proposal. It's about a factor of ten increase over the\n>> limit we have right now, which doesn't seem like enough to cause any\n>> real breakage, and it should be enough to satisfy the majority of the\n>> people who are unhappy with the current limit, and it is very little\n>> work.\n> \n> +1 ... we've surely beaten this topic to death by now.\n\nSounds like an agreement then. The original patch documents the range\nin postgresql.conf.sample, which is fine by me as this is done for\nsome parameters, and skips the part about doc/, which also matches\nwith the surrounding effort for other parameters, so the whole looks\nfine seen from here. Anybody willing to commit that?\n--\nMichael",
"msg_date": "Mon, 6 Jan 2020 13:01:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
{
"msg_contents": "On Sun, Jan 5, 2020 at 11:01 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Sounds like an agreement then. The original patch documents the range\n> in postgresql.conf.sample, which is fine by me as this is done for\n> some parameters, and skips the part about doc/, which also matches\n> with the surrounding effort for other parameters, so the whole looks\n> fine seen from here. Anybody willing to commit that?\n\nDone. I didn't commit the postgresql.conf.sample change because:\n\n(1) I think Bruce voted against it.\n\n(2) It makes the line a little wide, and I'd rather not do that.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 7 Jan 2020 12:21:26 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
},
{
"msg_contents": "On Tue, Jan 07, 2020 at 12:21:26PM -0500, Robert Haas wrote:\n> Done. I didn't commit the postgresql.conf.sample change because:\n> \n> (1) I think Bruce voted against it.\n> \n> (2) It makes the line a little wide, and I'd rather not do that.\n\nThanks!\n--\nMichael",
"msg_date": "Wed, 8 Jan 2020 10:56:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Increase the maximum value track_activity_query_size"
}
] |
[
{
"msg_contents": "Hi\n\nI have a report from my customer about migrating his application from\nOracle to Postgres.\n\nThe most significant issue was the lack of a correct estimation for the\ncoalesce function. He had to rewrite coalesce(var, X) = X to \"var IS NULL or var =\nX\". Then the result was very satisfactory.\n\nExample:\n\ncreate table xxx(a int);\ninsert into xxx select null from generate_series(1,10000);\ninsert into xxx select 1 from generate_series(1,1000);\ninsert into xxx select 0 from generate_series(1,1000);\nanalyze xxx;\n\npostgres=# explain analyze select * from xxx where coalesce(a, 0) = 0;\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------\n Seq Scan on xxx (cost=0.00..194.00 rows=60 width=4) (actual\ntime=0.041..4.276 rows=11000 loops=1)\n Filter: (COALESCE(a, 0) = 0)\n Rows Removed by Filter: 1000\n Planning Time: 0.099 ms\n Execution Time: 5.412 ms\n(5 rows)\n\npostgres=# explain analyze select * from xxx where a is null or a = 0;\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------\n Seq Scan on xxx (cost=0.00..194.00 rows=10167 width=4) (actual\ntime=0.052..5.891 rows=11000 loops=1)\n Filter: ((a IS NULL) OR (a = 0))\n Rows Removed by Filter: 1000\n Planning Time: 0.136 ms\n Execution Time: 7.522 ms\n(5 rows)\n\nI think the pattern coalesce(var, X) = X is very common, so it can be very\ninteresting to support it better.\n\nRegards\n\nPavel\n\nHiI have a report from my customer about migrating his application from Oracle to Postgres.The most significant issue was the lack of a correct estimation for the coalesce function. He had to rewrite coalesce(var, X) = X to \"var IS NULL or var = X\". 
Then the result was very satisfactory.Example:create table xxx(a int);insert into xxx select null from generate_series(1,10000);insert into xxx select 1 from generate_series(1,1000);insert into xxx select 0 from generate_series(1,1000);analyze xxx;postgres=# explain analyze select * from xxx where coalesce(a, 0) = 0; QUERY PLAN ---------------------------------------------------------------------------------------------------- Seq Scan on xxx (cost=0.00..194.00 rows=60 width=4) (actual time=0.041..4.276 rows=11000 loops=1) Filter: (COALESCE(a, 0) = 0) Rows Removed by Filter: 1000 Planning Time: 0.099 ms Execution Time: 5.412 ms(5 rows)postgres=# explain analyze select * from xxx where a is null or a = 0; QUERY PLAN ------------------------------------------------------------------------------------------------------- Seq Scan on xxx (cost=0.00..194.00 rows=10167 width=4) (actual time=0.052..5.891 rows=11000 loops=1) Filter: ((a IS NULL) OR (a = 0)) Rows Removed by Filter: 1000 Planning Time: 0.136 ms Execution Time: 7.522 ms(5 rows)I think so pattern coalesce(var, X) = X is very common so can be very interesting to support it better. RegardsPavel",
"msg_date": "Wed, 27 Nov 2019 08:47:56 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "missing estimation for coalesce function"
},
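Using the statistics from the example table above (10000 NULLs, 1000 ones, 1000 zeros, 12000 rows total), the estimate the rewrite achieves is simply null_frac plus the frequency of the compared constant. The sketch below illustrates how coalesce(a, X) = X could be estimated if the planner recognized the pattern; it is a sketch of the idea, not the actual selectivity code:

```python
def coalesce_eq_selectivity(null_frac: float, mcv: dict, const) -> float:
    """Selectivity of coalesce(col, const) = const: rows where col IS NULL
    plus rows where col = const, taken from the column's statistics."""
    return null_frac + mcv.get(const, 0.0)

# Statistics matching Pavel's table: 12000 rows total.
null_frac = 10000 / 12000
mcv = {1: 1000 / 12000, 0: 1000 / 12000}  # most-common-value frequencies

rows = 12000
print(round(rows * coalesce_eq_selectivity(null_frac, mcv, 0)))  # 11000, vs. the planner's 60
```

The estimate matches the 11000 rows the scan actually returns, where the planner's default guess for the opaque COALESCE expression was 60.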
{
"msg_contents": "On Wed, Nov 27, 2019 at 08:47:56AM +0100, Pavel Stehule wrote:\n> Hi\n> \n> I have a report from my customer about migration his application from\n> Oracle to Postgres.\n> \n> The most significant issue was missing correct estimation for coalesce\n> function. He had to rewrite coalesce(var, X) = X to \"var IS NULL or var =\n> X\". Then the result was very satisfactory.\n> \n> Example:\n> \n> create table xxx(a int);\n> insert into xxx select null from generate_series(1,10000);\n> insert into xxx select 1 from generate_series(1,1000);\n> insert into xxx select 0 from generate_series(1,1000);\n> analyze xxx;\n> \n> postgres=# explain analyze select * from xxx where coalesce(a, 0) = 0;\n> QUERY PLAN\n> \n> ----------------------------------------------------------------------------------------------------\n> Seq Scan on xxx (cost=0.00..194.00 rows=60 width=4) (actual\n> time=0.041..4.276 rows=11000 loops=1)\n> Filter: (COALESCE(a, 0) = 0)\n> Rows Removed by Filter: 1000\n> Planning Time: 0.099 ms\n> Execution Time: 5.412 ms\n> (5 rows)\n> \n> postgres=# explain analyze select * from xxx where a is null or a = 0;\n> QUERY PLAN\n> \n> -------------------------------------------------------------------------------------------------------\n> Seq Scan on xxx (cost=0.00..194.00 rows=10167 width=4) (actual\n> time=0.052..5.891 rows=11000 loops=1)\n> Filter: ((a IS NULL) OR (a = 0))\n> Rows Removed by Filter: 1000\n> Planning Time: 0.136 ms\n> Execution Time: 7.522 ms\n> (5 rows)\n> \n> I think so pattern coalesce(var, X) = X is very common so can be very\n> interesting to support it better.\n\nBetter support sounds great!\n\nHow specifically might this be better supported? On this relatively\nshort table, I see planning times considerably longer, I assume\nbecause they need to take a function call into account, and execution\ntimes longer but not all that much longer. 
I tried with 3 million\nrows, and got the representative samples below:\n\nshackle@[local]:5413/ctest(13devel)(149711) # EXPLAIN ANALYZE SELECT * FROM xxx WHERE COALESCE(a, 0)=0;\n QUERY PLAN \n══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════\n Gather (cost=1000.00..30391.00 rows=15000 width=4) (actual time=1.315..346.406 rows=999772 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Parallel Seq Scan on xxx (cost=0.00..27891.00 rows=6250 width=4) (actual time=0.029..216.419 rows=333257 loops=3)\n Filter: (COALESCE(a, 0) = 0)\n Rows Removed by Filter: 666743\n Planning Time: 0.204 ms\n Execution Time: 389.307 ms\n(8 rows)\n\nTime: 391.394 ms\n\nshackle@[local]:5413/ctest(13devel)(149711) # EXPLAIN ANALYZE SELECT * FROM xxx WHERE a IS NULL OR a = 0;\n QUERY PLAN \n═════════════════════════════════════════════════════════════════════════════════════════════════════════════\n Seq Scan on xxx (cost=0.00..49766.00 rows=995700 width=4) (actual time=0.043..524.401 rows=999772 loops=1)\n Filter: ((a IS NULL) OR (a = 0))\n Rows Removed by Filter: 2000228\n Planning Time: 0.106 ms\n Execution Time: 560.593 ms\n(5 rows)\n\nTime: 561.186 ms\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Thu, 28 Nov 2019 03:56:47 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: missing estimation for coalesce function"
},
{
"msg_contents": "Hi\n\nčt 28. 11. 2019 v 3:56 odesílatel David Fetter <david@fetter.org> napsal:\n\n> On Wed, Nov 27, 2019 at 08:47:56AM +0100, Pavel Stehule wrote:\n> > Hi\n> >\n> > I have a report from my customer about migration his application from\n> > Oracle to Postgres.\n> >\n> > The most significant issue was missing correct estimation for coalesce\n> > function. He had to rewrite coalesce(var, X) = X to \"var IS NULL or var =\n> > X\". Then the result was very satisfactory.\n> >\n> > Example:\n> >\n> > create table xxx(a int);\n> > insert into xxx select null from generate_series(1,10000);\n> > insert into xxx select 1 from generate_series(1,1000);\n> > insert into xxx select 0 from generate_series(1,1000);\n> > analyze xxx;\n> >\n> > postgres=# explain analyze select * from xxx where coalesce(a, 0) = 0;\n> > QUERY PLAN\n> >\n> >\n> ----------------------------------------------------------------------------------------------------\n> > Seq Scan on xxx (cost=0.00..194.00 rows=60 width=4) (actual\n> > time=0.041..4.276 rows=11000 loops=1)\n> > Filter: (COALESCE(a, 0) = 0)\n> > Rows Removed by Filter: 1000\n> > Planning Time: 0.099 ms\n> > Execution Time: 5.412 ms\n> > (5 rows)\n> >\n> > postgres=# explain analyze select * from xxx where a is null or a = 0;\n> > QUERY PLAN\n> >\n> >\n> -------------------------------------------------------------------------------------------------------\n> > Seq Scan on xxx (cost=0.00..194.00 rows=10167 width=4) (actual\n> > time=0.052..5.891 rows=11000 loops=1)\n> > Filter: ((a IS NULL) OR (a = 0))\n> > Rows Removed by Filter: 1000\n> > Planning Time: 0.136 ms\n> > Execution Time: 7.522 ms\n> > (5 rows)\n> >\n> > I think so pattern coalesce(var, X) = X is very common so can be very\n> > interesting to support it better.\n>\n> Better support sounds great!\n>\n> How specifically might this be better supported? 
On this relatively\n> short table, I see planning times considerably longer, I assume\n> because they need to take a function call into account, and execution\n> times longer but not all that much longer. I tried with 3 million\n> rows, and got the representative samples below:\n>\n> shackle@[local]:5413/ctest(13devel)(149711) # EXPLAIN ANALYZE SELECT *\n> FROM xxx WHERE COALESCE(a, 0)=0;\n> QUERY PLAN\n>\n>\n> ══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════\n> Gather (cost=1000.00..30391.00 rows=15000 width=4) (actual\n> time=1.315..346.406 rows=999772 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n> -> Parallel Seq Scan on xxx (cost=0.00..27891.00 rows=6250 width=4)\n> (actual time=0.029..216.419 rows=333257 loops=3)\n> Filter: (COALESCE(a, 0) = 0)\n> Rows Removed by Filter: 666743\n> Planning Time: 0.204 ms\n> Execution Time: 389.307 ms\n> (8 rows)\n>\n> Time: 391.394 ms\n>\n> shackle@[local]:5413/ctest(13devel)(149711) # EXPLAIN ANALYZE SELECT *\n> FROM xxx WHERE a IS NULL OR a = 0;\n> QUERY PLAN\n>\n>\n> ═════════════════════════════════════════════════════════════════════════════════════════════════════════════\n> Seq Scan on xxx (cost=0.00..49766.00 rows=995700 width=4) (actual\n> time=0.043..524.401 rows=999772 loops=1)\n> Filter: ((a IS NULL) OR (a = 0))\n> Rows Removed by Filter: 2000228\n> Planning Time: 0.106 ms\n> Execution Time: 560.593 ms\n> (5 rows)\n>\n> Time: 561.186 ms\n>\n\nI didn't think about rewriting. The correct solution should be via its own\nselectivity function. Currently a 5% estimate is used for coalesce (like for\nother functions). It probably should not be hard-coded, because coalesce is a\nnode already. 
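For illustration, the arithmetic such a selectivity function could use is simple. This is only a rough sketch (the function name is made up; it is not PostgreSQL's actual estimation code), using the statistics of the example table from the start of the thread:

```python
# Sketch: selectivity of COALESCE(a, c) = c can be estimated as
# selectivity(a IS NULL) + selectivity(a = c); both pieces are
# available from ANALYZE statistics (null_frac and the MCV list).
def coalesce_eq_selectivity(null_frac, mcv_freqs, const):
    """null_frac: fraction of NULLs; mcv_freqs: value -> frequency."""
    return null_frac + mcv_freqs.get(const, 0.0)

# Stats for the example table: 10000 NULLs, 1000 ones, 1000 zeros.
total = 12000
sel = coalesce_eq_selectivity(10000 / total,
                              {1: 1000 / total, 0: 1000 / total}, 0)
print(round(sel * total))  # 11000, the row count actually returned
```

For comparison, a flat default equality selectivity of 0.005 gives 12000 x 0.005 = 60, which matches the rows=60 estimate in the first plan.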
But it is part of code that I never modified.\n\nPavel\n\n>\n> Best,\n> David.\n> --\n> David Fetter <david(at)fetter(dot)org> http://fetter.org/\n> Phone: +1 415 235 3778\n>\n> Remember to vote!\n> Consider donating to Postgres: http://www.postgresql.org/about/donate\n>\n",
"msg_date": "Thu, 28 Nov 2019 04:48:54 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: missing estimation for coalesce function"
},
{
"msg_contents": "čt 28. 11. 2019 v 4:48 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\r\nnapsal:\r\n\r\n> Hi\r\n>\r\n> čt 28. 11. 2019 v 3:56 odesílatel David Fetter <david@fetter.org> napsal:\r\n>\r\n>> On Wed, Nov 27, 2019 at 08:47:56AM +0100, Pavel Stehule wrote:\r\n>> > Hi\r\n>> >\r\n>> > I have a report from my customer about migration his application from\r\n>> > Oracle to Postgres.\r\n>> >\r\n>> > The most significant issue was missing correct estimation for coalesce\r\n>> > function. He had to rewrite coalesce(var, X) = X to \"var IS NULL or var\r\n>> =\r\n>> > X\". Then the result was very satisfactory.\r\n>> >\r\n>> > Example:\r\n>> >\r\n>> > create table xxx(a int);\r\n>> > insert into xxx select null from generate_series(1,10000);\r\n>> > insert into xxx select 1 from generate_series(1,1000);\r\n>> > insert into xxx select 0 from generate_series(1,1000);\r\n>> > analyze xxx;\r\n>> >\r\n>> > postgres=# explain analyze select * from xxx where coalesce(a, 0) = 0;\r\n>> > QUERY PLAN\r\n>> >\r\n>> >\r\n>> ----------------------------------------------------------------------------------------------------\r\n>> > Seq Scan on xxx (cost=0.00..194.00 rows=60 width=4) (actual\r\n>> > time=0.041..4.276 rows=11000 loops=1)\r\n>> > Filter: (COALESCE(a, 0) = 0)\r\n>> > Rows Removed by Filter: 1000\r\n>> > Planning Time: 0.099 ms\r\n>> > Execution Time: 5.412 ms\r\n>> > (5 rows)\r\n>> >\r\n>> > postgres=# explain analyze select * from xxx where a is null or a = 0;\r\n>> > QUERY PLAN\r\n>> >\r\n>> >\r\n>> -------------------------------------------------------------------------------------------------------\r\n>> > Seq Scan on xxx (cost=0.00..194.00 rows=10167 width=4) (actual\r\n>> > time=0.052..5.891 rows=11000 loops=1)\r\n>> > Filter: ((a IS NULL) OR (a = 0))\r\n>> > Rows Removed by Filter: 1000\r\n>> > Planning Time: 0.136 ms\r\n>> > Execution Time: 7.522 ms\r\n>> > (5 rows)\r\n>> >\r\n>> > I think so pattern coalesce(var, X) = X is very common so can be 
very\r\n>> > interesting to support it better.\r\n>>\r\n>> Better support sounds great!\r\n>>\r\n>> How specifically might this be better supported? On this relatively\r\n>> short table, I see planning times considerably longer, I assume\r\n>> because they need to take a function call into account, and execution\r\n>> times longer but not all that much longer. I tried with 3 million\r\n>> rows, and got the representative samples below:\r\n>>\r\n>> shackle@[local]:5413/ctest(13devel)(149711) # EXPLAIN ANALYZE SELECT *\r\n>> FROM xxx WHERE COALESCE(a, 0)=0;\r\n>> QUERY PLAN\r\n>>\r\n>>\r\n>> ══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════\r\n>> Gather (cost=1000.00..30391.00 rows=15000 width=4) (actual\r\n>> time=1.315..346.406 rows=999772 loops=1)\r\n>> Workers Planned: 2\r\n>> Workers Launched: 2\r\n>> -> Parallel Seq Scan on xxx (cost=0.00..27891.00 rows=6250 width=4)\r\n>> (actual time=0.029..216.419 rows=333257 loops=3)\r\n>> Filter: (COALESCE(a, 0) = 0)\r\n>> Rows Removed by Filter: 666743\r\n>> Planning Time: 0.204 ms\r\n>> Execution Time: 389.307 ms\r\n>> (8 rows)\r\n>>\r\n>> Time: 391.394 ms\r\n>>\r\n>> shackle@[local]:5413/ctest(13devel)(149711) # EXPLAIN ANALYZE SELECT *\r\n>> FROM xxx WHERE a IS NULL OR a = 0;\r\n>> QUERY PLAN\r\n>>\r\n>>\r\n>> ═════════════════════════════════════════════════════════════════════════════════════════════════════════════\r\n>> Seq Scan on xxx (cost=0.00..49766.00 rows=995700 width=4) (actual\r\n>> time=0.043..524.401 rows=999772 loops=1)\r\n>> Filter: ((a IS NULL) OR (a = 0))\r\n>> Rows Removed by Filter: 2000228\r\n>> Planning Time: 0.106 ms\r\n>> Execution Time: 560.593 ms\r\n>> (5 rows)\r\n>>\r\n>> Time: 561.186 ms\r\n>>\r\n>\r\n> I didn't thing about rewriting. The correct solution should be via own\r\n> selectivity function. Now for coalesce is used 5% estimation (like for\r\n> other functions). 
Probably it should not be hard code because coalesce is a\r\n> node already. But it is part of code that I never modified.\r\n>\r\n\r\nbut support functions can be used\r\n\r\nhttps://www.cybertec-postgresql.com/en/optimizer-support-functions/\r\n\r\npostgres=# create table test(id integer);\r\nCREATE TABLE\r\npostgres=# insert into test select generate_series(1,100000);\r\nINSERT 0 100000\r\npostgres=# insert into test select null from generate_series(1,1000);\r\nINSERT 0 1000\r\npostgres=# analyze test;\r\nANALYZE\r\npostgres=# create index on test(id);\r\nCREATE INDEX\r\npostgres=# explain analyze select * from test where coalesce(id, 10) = 10;\r\n┌───────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n│ QUERY PLAN\r\n │\r\n╞═══════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n│ Seq Scan on test (cost=0.00..1708.50 rows=505 width=4) (actual\r\ntime=0.062..18.370 rows=1001 loops=1) │\r\n│ Filter: (COALESCE(id, 10) = 10)\r\n │\r\n│ Rows Removed by Filter: 99999\r\n │\r\n│ Planning Time: 37.212 ms\r\n │\r\n│ Execution Time: 18.479 ms\r\n │\r\n└───────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n(5 rows)\r\n\r\npostgres=# explain analyze select * from test where id is null or id = 10;\r\n┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n│ QUERY PLAN\r\n │\r\n╞════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n│ Bitmap Heap Scan on test (cost=24.30..482.35 rows=964 width=4) (actual\r\ntime=0.197..0.334 rows=1001 loops=1) │\r\n│ Recheck Cond: ((id IS NULL) OR (id = 10))\r\n │\r\n│ Heap Blocks: exact=5\r\n │\r\n│ -> BitmapOr (cost=24.30..24.30 rows=964 width=0) (actual\r\ntime=0.189..0.189 rows=0 loops=1) │\r\n│ -> Bitmap Index Scan on 
test_id_idx (cost=0.00..19.52 rows=963\r\nwidth=0) (actual time=0.170..0.170 rows=1000 loops=1) │\r\n│ Index Cond: (id IS NULL)\r\n │\r\n│ -> Bitmap Index Scan on test_id_idx (cost=0.00..4.30 rows=1\r\nwidth=0) (actual time=0.019..0.019 rows=1 loops=1) │\r\n│ Index Cond: (id = 10)\r\n │\r\n│ Planning Time: 0.090 ms\r\n │\r\n│ Execution Time: 0.413 ms\r\n │\r\n└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n(10 rows)\r\n\r\nThere can be strong benefit from replacement if indexes are used.\r\n\r\n\r\n> Pavel\r\n>\r\n>>\r\n>> Best,\r\n>> David.\r\n>> --\r\n>> David Fetter <david(at)fetter(dot)org> http://fetter.org/\r\n>> Phone: +1 415 235 3778\r\n>>\r\n>> Remember to vote!\r\n>> Consider donating to Postgres: http://www.postgresql.org/about/donate\r\n>>\r\n>\r\n",
"msg_date": "Thu, 28 Nov 2019 08:45:33 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: missing estimation for coalesce function"
},
{
"msg_contents": "On Wed, 2019-11-27 at 08:47 +0100, Pavel Stehule wrote:\n> The most significant issue was missing correct estimation for coalesce function.\n> He had to rewrite coalesce(var, X) = X to \"var IS NULL or var = X\".\n> Then the result was very satisfactory.\n> \n> postgres=# explain analyze select * from xxx where coalesce(a, 0) = 0;\n> QUERY PLAN \n> ----------------------------------------------------------------------------------------------------\n> Seq Scan on xxx (cost=0.00..194.00 rows=60 width=4) (actual time=0.041..4.276 rows=11000 loops=1)\n\nI think that this is asking for a planner support function:\nhttps://www.postgresql.org/docs/current/xfunc-optimization.html\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Thu, 28 Nov 2019 15:51:04 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: missing estimation for coalesce function"
},
{
"msg_contents": "čt 28. 11. 2019 v 15:51 odesílatel Laurenz Albe <laurenz.albe@cybertec.at>\nnapsal:\n\n> On Wed, 2019-11-27 at 08:47 +0100, Pavel Stehule wrote:\n> > The most significant issue was missing correct estimation for coalesce\n> function.\n> > He had to rewrite coalesce(var, X) = X to \"var IS NULL or var = X\".\n> > Then the result was very satisfactory.\n> >\n> > postgres=# explain analyze select * from xxx where coalesce(a, 0) = 0;\n> > QUERY PLAN\n>\n> >\n> ----------------------------------------------------------------------------------------------------\n> > Seq Scan on xxx (cost=0.00..194.00 rows=60 width=4) (actual\n> time=0.041..4.276 rows=11000 loops=1)\n>\n> I think that this is asking for a planner support function:\n> https://www.postgresql.org/docs/current/xfunc-optimization.html\n\n\nProbably it needs more work - currently this support is for SRF function or\nfor boolean functions.\n\nOn second hand coalesce is not function - it's expr node. Originally I\nthough so selectivity function can be enough. Now I think so it is not\nenough. It is similar to DISTINCT FROM operator.\n\nSo some plan can look like\n\n1. introduction isnull_or_eq operator\n2. this operator can be used for indexscan too\n3. implement selectivity function for this operator (and maybe for coalesce)\n4. translate COALESCE(var, const) = const --> var isnull_or_eq const\n\nI am not sure if @4 is possible or if some more complex transformations are\npossible COALESCE(var1, var2) = var2\n\nBut what I read about it - MSSQL and Oracle has does this optimization\n\nRegards\n\nPavel\n\n\n>\n> Yours,\n> Laurenz Albe\n>\n>\n\nčt 28. 11. 
2019 v 15:51 odesílatel Laurenz Albe <laurenz.albe@cybertec.at> napsal:On Wed, 2019-11-27 at 08:47 +0100, Pavel Stehule wrote:\n> The most significant issue was missing correct estimation for coalesce function.\n> He had to rewrite coalesce(var, X) = X to \"var IS NULL or var = X\".\n> Then the result was very satisfactory.\n> \n> postgres=# explain analyze select * from xxx where coalesce(a, 0) = 0;\n> QUERY PLAN \n> ----------------------------------------------------------------------------------------------------\n> Seq Scan on xxx (cost=0.00..194.00 rows=60 width=4) (actual time=0.041..4.276 rows=11000 loops=1)\n\nI think that this is asking for a planner support function:\nhttps://www.postgresql.org/docs/current/xfunc-optimization.htmlProbably it needs more work - currently this support is for SRF function or for boolean functions.On second hand coalesce is not function - it's expr node. Originally I though so selectivity function can be enough. Now I think so it is not enough. It is similar to DISTINCT FROM operator. So some plan can look like1. introduction isnull_or_eq operator2. this operator can be used for indexscan too3. implement selectivity function for this operator (and maybe for coalesce)4. translate COALESCE(var, const) = const --> var isnull_or_eq constI am not sure if @4 is possible or if some more complex transformations are possible COALESCE(var1, var2) = var2But what I read about it - MSSQL and Oracle has does this optimization RegardsPavel\n\nYours,\nLaurenz Albe",
"msg_date": "Thu, 28 Nov 2019 19:47:13 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: missing estimation for coalesce function"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI wonder how it is possible to prohibit parallel scan for the external \nstorage accessed through tableam?\nFor example if I want to implement specialized tableam for fast access \nto temp tables, how can I inform optimizer that\nparallel scan is not possible (because table data is local to the backend)?\n\nThanks in advance,\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Wed, 27 Nov 2019 14:33:42 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "How to prohibit parallel scan through tableam?"
},
{
"msg_contents": "On Wed, 27 Nov 2019 at 12:33, Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\nwrote:\n\n> Hi hackers,\n>\n> I wonder how it is possible to prohibit parallel scan for the external\n> storage accessed through tableam?\n> For example if I want to implement specialized tableam for fast access\n> to temp tables, how can I inform optimizer that\n> parallel scan is not possible (because table data is local to the backend)?\n>\n> How about setting parallel_setup_cost to disable_cost in costsize.c for\nyour specific scan method.\n\n-- \nRegards,\nRafia Sabih\n\nOn Wed, 27 Nov 2019 at 12:33, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:Hi hackers,\n\nI wonder how it is possible to prohibit parallel scan for the external \nstorage accessed through tableam?\nFor example if I want to implement specialized tableam for fast access \nto temp tables, how can I inform optimizer that\nparallel scan is not possible (because table data is local to the backend)? How about setting parallel_setup_cost to disable_cost in costsize.c for your specific scan method.-- Regards,Rafia Sabih",
"msg_date": "Wed, 27 Nov 2019 13:12:35 +0100",
"msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How to prohibit parallel scan through tableam?"
},
{
"msg_contents": "On 27.11.2019 15:12, Rafia Sabih wrote:\n>\n>\n> On Wed, 27 Nov 2019 at 12:33, Konstantin Knizhnik \n> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> wrote:\n>\n> Hi hackers,\n>\n> I wonder how it is possible to prohibit parallel scan for the\n> external\n> storage accessed through tableam?\n> For example if I want to implement specialized tableam for fast\n> access\n> to temp tables, how can I inform optimizer that\n> parallel scan is not possible (because table data is local to the\n> backend)?\n>\n> How about setting parallel_setup_cost to disable_cost in costsize.c \n> for your specific scan method.\n\nHow can I do it if i just implementing my AM and not going to change any \npostgres code?\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\n\n\nOn 27.11.2019 15:12, Rafia Sabih wrote:\n\n\n\n\n\n\n\n\nOn Wed, 27 Nov 2019 at\n 12:33, Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\n wrote:\n\nHi hackers,\n\n I wonder how it is possible to prohibit parallel scan for\n the external \n storage accessed through tableam?\n For example if I want to implement specialized tableam for\n fast access \n to temp tables, how can I inform optimizer that\n parallel scan is not possible (because table data is local\n to the backend)?\n\n\n How about setting parallel_setup_cost to disable_cost in\n costsize.c for your specific scan method.\n\n\n\n\n How can I do it if i just implementing my AM and not going to change\n any postgres code?\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 27 Nov 2019 16:10:20 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: How to prohibit parallel scan through tableam?"
},
{
"msg_contents": "On Wed, 27 Nov 2019 at 12:33, Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\nwrote:\n\n> Hi hackers,\n>\n> I wonder how it is possible to prohibit parallel scan for the external\n> storage accessed through tableam?\n> For example if I want to implement specialized tableam for fast access\n> to temp tables, how can I inform optimizer that\n> parallel scan is not possible (because table data is local to the backend)?\n>\n> One moment, isn't that parallel scans are already restricted for temp\ntables, or I have misunderstood something here...?\n\n\n\n-- \nRegards,\nRafia Sabih\n\nOn Wed, 27 Nov 2019 at 12:33, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:Hi hackers,\n\nI wonder how it is possible to prohibit parallel scan for the external \nstorage accessed through tableam?\nFor example if I want to implement specialized tableam for fast access \nto temp tables, how can I inform optimizer that\nparallel scan is not possible (because table data is local to the backend)?One moment, isn't that parallel scans are already restricted for temp tables, or I have misunderstood something here...? -- Regards,Rafia Sabih",
"msg_date": "Thu, 28 Nov 2019 14:23:21 +0100",
"msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How to prohibit parallel scan through tableam?"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-27 14:33:42 +0300, Konstantin Knizhnik wrote:\n> I wonder how it is possible to prohibit parallel scan for the external\n> storage accessed through tableam?\n> For example if I want to implement specialized tableam for fast access to\n> temp tables, how can I inform optimizer that\n> parallel scan is not possible (because table data is local to the backend)?\n\nI don't think there currently is a good way to do so - but it shouldn't\nbe hard to add that capability.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 3 Dec 2019 13:22:17 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: How to prohibit parallel scan through tableam?"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-27 16:10:20 +0300, Konstantin Knizhnik wrote:\n> On 27.11.2019 15:12, Rafia Sabih wrote:\n> > On Wed, 27 Nov 2019 at 12:33, Konstantin Knizhnik\n> > <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> wrote:\n> > \n> > Hi hackers,\n> > \n> > I wonder how it is possible to prohibit parallel scan for the\n> > external\n> > storage accessed through tableam?\n> > For example if I want to implement specialized tableam for fast\n> > access\n> > to temp tables, how can I inform optimizer that\n> > parallel scan is not possible (because table data is local to the\n> > backend)?\n> > \n> > How about setting parallel_setup_cost to disable_cost in costsize.c for\n> > your specific scan method.\n> \n> How can I do it if i just implementing my AM and not going to change any\n> postgres code?\n\nI think a set_rel_pathlist hook that prevents parallel paths from being\nconsidered would be your best bet for now. But I encourage you to\nsuggest a patch to tableam to support it properly in future releases.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 3 Dec 2019 13:24:14 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: How to prohibit parallel scan through tableam?"
}
]
[
{
"msg_contents": "Dear all,\n\nI've met the following problem in our product environment. We tried to reproduce the problem, but because of the low probability of occurrence, we could not reproduce it.\n1. phenomenon\nBackend process crashed when executing 2pc transaction in citus.\n\n- coordinator\n\n\t\t2019-11-24 11:08:09.914 CST 35791 10.246.66.182(6881) lobausr lobadbw2 PostgreSQL JDBC Driver WARNING: server conn crashed?\n\t\t2019-11-24 11:08:09.914 CST 35791 10.246.66.182(6881) lobausr lobadbw2 PostgreSQL JDBC Driver CONTEXT: while executing command on 10.230.27.117:6432\n\t\t2019-11-24 11:08:09.914 CST 35791 10.246.66.182(6881) lobausr lobadbw2 PostgreSQL JDBC Driver WARNING: failed to commit transaction on 10.230.27.117:6432\n\t\t2019-11-24 11:08:09.914 CST 35791 10.246.66.182(6881) lobausr lobadbw2 PostgreSQL JDBC Driver WARNING: failed to roll back prepared transaction 'citus_0_35791_4207001212_1287199'\n\t\t2019-11-24 11:08:09.914 CST 35791 10.246.66.182(6881) lobausr lobadbw2 PostgreSQL JDBC Driver HINT: Run \"COMMIT PREPARED 'citus_0_35791_4207001212_1287199'\" on 10.230.27.117:6432\n\t\t2019-11-24 11:08:09.914 CST 35791 10.246.66.182(6881) lobausr lobadbw2 PostgreSQL JDBC Driver WARNING: server closed the connection unexpectedly\n\t\t\t\tThis probably means the server terminated abnormally\n\t\t\t\tbefore or while processing the request.\n\t\t2019-11-24 11:08:09.914 CST 35791 10.246.66.182(6881) lobausr lobadbw2 PostgreSQL JDBC Driver CONTEXT: while executing command on 10.230.27.117:6432\n\t\t2019-11-24 11:08:09.914 CST 35791 10.246.66.182(6881) lobausr lobadbw2 PostgreSQL JDBC Driver LOG: duration: 17123.210 ms execute S_1: COMMIT\n\n- worker\n\n\t\t2019-11-24 11:08:09.854 CST 14668 LOG: server process (PID 14714) was terminated by signal 6: Aborted\n\t\t2019-11-24 11:08:09.854 CST 14668 DETAIL: Failed process was running: COMMIT PREPARED 'citus_0_35791_4207001212_1287199'\n\t\t2019-11-24 11:08:09.854 CST 14668 LOG: terminating any other active 
server processes\n \n2. Occurrence condition\nDistributed transaction in business SQL\n\nPostgreSQL:10.7\ncitus:7.4.1\nOS:RHEL6.3\n\n3. Investigation\n3.1 PG log\n- worker\n\n\t\t*** glibc detected *** postgres: lobausr lobadbw2 127.0.0.1(39349) COMMIT PREPARED: double free or corruption (!prev): 0x0000000001a977a0 ***\n\t\t======= Backtrace: =========\n\t\t/lib64/libc.so.6[0x369e275f4e]\n\t\t/lib64/libc.so.6[0x369e278cf0]\n\t\tpostgres: lobausr lobadbw2 127.0.0.1(39349) COMMIT PREPARED(XLogReaderFree+0x57)[0x4ff947]\n\t\tpostgres: lobausr lobadbw2 127.0.0.1(39349) COMMIT PREPARED[0x4e4387]\n\t\tpostgres: lobausr lobadbw2 127.0.0.1(39349) COMMIT PREPARED(FinishPreparedTransaction+0x139)[0x4e5849]\n\t\tpostgres: lobausr lobadbw2 127.0.0.1(39349) COMMIT PREPARED(standard_ProcessUtility+0x711)[0x72d601]\n\t\t/usr/pgsql-10/lib/citus.so(multi_ProcessUtility+0x741)[0x7f63a1ae97e1]\n\t\t/usr/pgsql-10/lib/pg_stat_statements.so(+0x4178)[0x7f63a11e1178]\n\t\tpostgres: lobausr lobadbw2 127.0.0.1(39349) COMMIT PREPARED[0x729388]\n\t\tpostgres: lobausr lobadbw2 127.0.0.1(39349) COMMIT PREPARED[0x72a2fd]\n\t\tpostgres: lobausr lobadbw2 127.0.0.1(39349) COMMIT PREPARED(PortalRun+0x238)[0x72aa98]\n\t\tpostgres: lobausr lobadbw2 127.0.0.1(39349) COMMIT PREPARED[0x727051]\n\t\tpostgres: lobausr lobadbw2 127.0.0.1(39349) COMMIT PREPARED(PostgresMain+0x549)[0x728039]\n\t\tpostgres: lobausr lobadbw2 127.0.0.1(39349) COMMIT PREPARED(PostmasterMain+0x194a)[0x6bb43a]\n\t\tpostgres: lobausr lobadbw2 127.0.0.1(39349) COMMIT PREPARED(main+0x7c0)[0x63b4d0]\n\t\t/lib64/libc.so.6(__libc_start_main+0xfd)[0x369e21ed5d]\n\t\tpostgres: lobausr lobadbw2 127.0.0.1(39349) COMMIT PREPARED[0x477149]\n\t\t======= Memory map: ========\n\t\t00400000-00a99000 r-xp 00000000 fd:08 266747 /usr/pgsql-10/bin/postgres\n\t\t00c99000-00ca7000 rw-p 00699000 fd:08 266747 /usr/pgsql-10/bin/postgres\n\t\t00ca7000-00d10000 rw-p 00000000 00:00 0 \n\t\t0190c000-019b3000 rw-p 00000000 00:00 0 \n\t\t019b3000-01c5d000 rw-p 
00000000 00:00 0 \n\t\t369de00000-369de20000 r-xp 00000000 fd:01 3765 /lib64/ld-2.12.so\n\t\t369e01f000-369e020000 r--p 0001f000 fd:01 3765 /lib64/ld-2.12.so\n\t\t369e020000-369e021000 rw-p 00020000 fd:01 3765 /lib64/ld-2.12.so\n\t\t369e021000-369e022000 rw-p 00000000 00:00 0 \n\t\t369e200000-369e38a000 r-xp 00000000 fd:01 3766 /lib64/libc-2.12.so\n\t\t369e38a000-369e58a000 ---p 0018a000 fd:01 3766 /lib64/libc-2.12.so\n\t\t369e58a000-369e58e000 r--p 0018a000 fd:01 3766 /lib64/libc-2.12.so\n\t\t369e58e000-369e58f000 rw-p 0018e000 fd:01 3766 /lib64/libc-2.12.so\n\t\t369e58f000-369e594000 rw-p 00000000 00:00 0 \n\t\t369e600000-369e602000 r-xp 00000000 fd:01 5167 /lib64/libdl-2.12.so\n\t\t369e602000-369e802000 ---p 00002000 fd:01 5167 /lib64/libdl-2.12.so\n\t\t369e802000-369e803000 r--p 00002000 fd:01 5167 /lib64/libdl-2.12.so\n\t\t369e803000-369e804000 rw-p 00003000 fd:01 5167 /lib64/libdl-2.12.so\n\t\t369ea00000-369ea17000 r-xp 00000000 fd:01 5174 /lib64/libpthread-2.12.so\n\t\t369ea17000-369ec17000 ---p 00017000 fd:01 5174 /lib64/libpthread-2.12.so\n\t\t369ec17000-369ec18000 r--p 00017000 fd:01 5174 /lib64/libpthread-2.12.so\n\t\t369ec18000-369ec19000 rw-p 00018000 fd:01 5174 /lib64/libpthread-2.12.so\n\t\t369ec19000-369ec1d000 rw-p 00000000 00:00 0 \n\t\t369ee00000-369ee07000 r-xp 00000000 fd:01 5175 /lib64/librt-2.12.so\n\t\t369ee07000-369f006000 ---p 00007000 fd:01 5175 /lib64/librt-2.12.so\n\t\t369f006000-369f007000 r--p 00006000 fd:01 5175 /lib64/librt-2.12.so\n\t\t369f007000-369f008000 rw-p 00007000 fd:01 5175 /lib64/librt-2.12.so\n\t\t369f200000-369f283000 r-xp 00000000 fd:01 5182 /lib64/libm-2.12.so\n\t\t369f283000-369f482000 ---p 00083000 fd:01 5182 /lib64/libm-2.12.so\n\t\t369f482000-369f483000 r--p 00082000 fd:01 5182 /lib64/libm-2.12.so\n\t\t369f483000-369f484000 rw-p 00083000 fd:01 5182 /lib64/libm-2.12.so\n\t\t369fa00000-369fa0c000 r-xp 00000000 fd:01 5170 /lib64/libpam.so.0.82.2\n\t\t369fa0c000-369fc0c000 ---p 0000c000 fd:01 5170 
/lib64/libpam.so.0.82.2\n\t\t369fc0c000-369fc0d000 r--p 0000c000 fd:01 5170 /lib64/libpam.so.0.82.2\n\t\t369fc0d000-369fc0e000 rw-p 0000d000 fd:01 5170 /lib64/libpam.so.0.82.2\n\t\t369fe00000-369fe1d000 r-xp 00000000 fd:01 5179 /lib64/libselinux.so.1\n\t\t369fe1d000-36a001c000 ---p 0001d000 fd:01 5179 /lib64/libselinux.so.1\n\t\t36a001c000-36a001d000 r--p 0001c000 fd:01 5179 /lib64/libselinux.so.1\n\t\t36a001d000-36a001e000 rw-p 0001d000 fd:01 5179 /lib64/libselinux.so.1\n\t\t36a001e000-36a001f000 rw-p 00000000 00:00 0 \n\t\t36a0200000-36a0219000 r-xp 00000000 fd:08 17924 /usr/lib64/libsasl2.so.2.0.23\n\t\t36a0219000-36a0418000 ---p 00019000 fd:08 17924 /usr/lib64/libsasl2.so.2.0.23\n\t\t36a0418000-36a0419000 r--p 00018000 fd:08 17924 /usr/lib64/libsasl2.so.2.0.23\n\t\t36a0419000-36a041a000 rw-p 00019000 fd:08 17924 /usr/lib64/libsasl2.so.2.0.23\n\t\t36a0600000-36a0616000 r-xp 00000000 fd:01 3772 /lib64/libgcc_s-4.4.6-20120305.so.1\n\t\t36a0616000-36a0815000 ---p 00016000 fd:01 3772 /lib64/libgcc_s-4.4.6-20120305.so.1\n\t\t36a0815000-36a0816000 rw-p 00015000 fd:01 3772 /lib64/libgcc_s-4.4.6-20120305.so.1\n\t\t36a0a00000-36a0a16000 r-xp 00000000 fd:01 5178 /lib64/libresolv-2.12.so\n\t\t36a0a16000-36a0c16000 ---p 00016000 fd:01 5178 /lib64/libresolv-2.12.so\n\t\t36a0c16000-36a0c17000 r--p 00016000 fd:01 5178 /lib64/libresolv-2.12.so\n\t\t36a0c17000-36a0c18000 rw-p 00017000 fd:01 5178 /lib64/libresolv-2.12.so\n\t\t36a0c18000-36a0c1a000 rw-p 00000000 00:00 0 \n\t\t36a0e00000-36a0e49000 r-xp 00000000 fd:01 1926 /lib64/libldap-2.4.so.2.5.6\n\t\t36a0e49000-36a1048000 ---p 00049000 fd:01 1926 /lib64/libldap-2.4.so.2.5.6\n\t\t36a1048000-36a1049000 r--p 00048000 fd:01 1926 /lib64/libldap-2.4.so.2.5.6\n\t\t36a1049000-36a104b000 rw-p 00049000 fd:01 1926 /lib64/libldap-2.4.so.2.5.6\n\t\t36a1600000-36a160e000 r-xp 00000000 fd:01 3777 /lib64/liblber-2.4.so.2.5.6\n\t\t36a160e000-36a180d000 ---p 0000e000 fd:01 3777 /lib64/liblber-2.4.so.2.5.6\n\t\t36a180d000-36a180e000 r--p 
0000d000 fd:01 3777 /lib64/liblber-2.4.so.2.5.6\n\t\t36a180e000-36a180f000 rw-p 0000e000 fd:01 3777 /lib64/liblber-2.4.so.2.5.6\n\t\t36a1a00000-36a1a5d000 r-xp 00000000 fd:01 5168 /lib64/libfreebl3.so\n\t\t36a1a5d000-36a1c5c000 ---p 0005d000 fd:01 5168 /lib64/libfreebl3.so\n\t\t36a1c5c000-36a1c5d000 r--p 0005c000 fd:01 5168 /lib64/libfreebl3.so\n\t\t36a1c5d000-36a1c5e000 rw-p 0005d000 fd:01 5168 /lib64/libfreebl3.so\n\t\t36a1c5e000-36a1c62000 rw-p 00000000 00:00 0 \n\t\t36a1e00000-36a1e07000 r-xp 00000000 fd:01 5169 /lib64/libcrypt-2.12.so\n\t\t36a1e07000-36a2007000 ---p 00007000 fd:01 5169 /lib64/libcrypt-2.12.so\n\t\t36a2007000-36a2008000 r--p 00007000 fd:01 5169 /lib64/libcrypt-2.12.so\n\t\t36a2008000-36a2009000 rw-p 00008000 fd:01 5169 /lib64/libcrypt-2.12.so\n\t\t36a2009000-36a2037000 rw-p 00000000 00:00 0 \n\t\t36a2200000-36a2203000 r-xp 00000000 fd:01 5192 /lib64/libcom_err.so.2.1\n\t\t36a2203000-36a2402000 ---p 00003000 fd:01 5192 /lib64/libcom_err.so.2.1\n\t\t36a2402000-36a2403000 r--p 00002000 fd:01 5192 /lib64/libcom_err.so.2.1\n\t\t36a2403000-36a2404000 rw-p 00003000 fd:01 5192 /lib64/libcom_err.so.2.1\n\t\t36a2600000-36a2620000 r-xp 00000000 fd:08 5902 /usr/lib64/libnssutil3.so\n\t\t36a2620000-36a281f000 ---p 00020000 fd:08 5902 /usr/lib64/libnssutil3.so\n\t\t36a281f000-36a2825000 r--p 0001f000 fd:08 5902 /usr/lib64/libnssutil3.so\n\t\t36a2825000-36a2826000 rw-p 00025000 fd:08 5902 /usr/lib64/libnssutil3.so\n\t\t36a2a00000-36a2b47000 r-xp 00000000 fd:08 18039 /usr/lib64/libxml2.so.2.7.6\n\t\t36a2b47000-36a2d46000 ---p 00147000 fd:08 18039 /usr/lib64/libxml2.so.2.7.6\n\t\t36a2d46000-36a2d50000 rw-p 00146000 fd:08 18039 /usr/lib64/libxml2.so.2.7.6\n\t\t36a2d50000-36a2d51000 rw-p 00000000 00:00 0 \n\t\t36a2e00000-36a2e03000 r-xp 00000000 fd:01 5188 /lib64/libplds4.so\n\t\t36a2e03000-36a3002000 ---p 00003000 fd:01 5188 /lib64/libplds4.so\n\t\t36a3002000-36a3003000 r--p 00002000 fd:01 5188 /lib64/libplds4.so\n\t\t36a3003000-36a3004000 rw-p 00003000 fd:01 
5188 /lib64/libplds4.so\n\t\t36a3200000-36a3239000 r-xp 00000000 fd:01 663 /lib64/libnspr4.so\n\t\t36a3239000-36a3438000 ---p 00039000 fd:01 663 /lib64/libnspr4.so\n\t\t36a3438000-36a3439000 r--p 00038000 fd:01 663 /lib64/libnspr4.so\n\t\t36a3439000-36a343b000 rw-p 00039000 fd:01 663 /lib64/libnspr4.so\n\t\t36a343b000-36a343d000 rw-p 00000000 00:00 0 \n\t\t36a3600000-36a3638000 r-xp 00000000 fd:08 21660 /usr/lib64/libssl3.so\n\t\t36a3638000-36a3838000 ---p 00038000 fd:08 21660 /usr/lib64/libssl3.so\n\t\t36a3838000-36a383a000 r--p 00038000 fd:08 21660 /usr/lib64/libssl3.so\n\t\t36a383a000-36a383b000 rw-p 0003a000 fd:08 21660 /usr/lib64/libssl3.so\n\t\t36a383b000-36a383c000 rw-p 00000000 00:00 0 \n\t\t36a3a00000-36a3a04000 r-xp 00000000 fd:01 5187 /lib64/libplc4.so\n\t\t36a3a04000-36a3c03000 ---p 00004000 fd:01 5187 /lib64/libplc4.so\n\t\t36a3c03000-36a3c04000 r--p 00003000 fd:01 5187 /lib64/libplc4.so\n\t\t36a3c04000-36a3c05000 rw-p 00004000 fd:01 5187 /lib64/libplc4.so\n\t\t36a3e00000-36a3f33000 r-xp 00000000 fd:08 21659 /usr/lib64/libnss3.so\n\t\t36a3f33000-36a4132000 ---p 00133000 fd:08 21659 /usr/lib64/libnss3.so\n\t\t36a4132000-36a4137000 r--p 00132000 fd:08 21659 /usr/lib64/libnss3.so\n\t\t36a4137000-36a4139000 rw-p 00137000 fd:08 21659 /usr/lib64/libnss3.so\n\t\t36a4139000-36a413b000 rw-p 00000000 00:00 0 \n\t\t36a4200000-36a43ba000 r-xp 00000000 fd:08 20219 /usr/lib64/libcrypto.so.1.0.1e\n\t\t36a43ba000-36a45b9000 ---p 001ba000 fd:08 20219 /usr/lib64/libcrypto.so.1.0.1e\n\t\t36a45b9000-36a45d4000 r--p 001b9000 fd:08 20219 /usr/lib64/libcrypto.so.1.0.1e\n\t\t36a45d4000-36a45e0000 rw-p 001d4000 fd:08 20219 /usr/lib64/libcrypto.so.1.0.1e\n\t\t36a45e0000-36a45e4000 rw-p 00000000 00:00 0 \n\t\t36a4600000-36a4628000 r-xp 00000000 fd:08 20218 /usr/lib64/libsmime3.so\n\t\t36a4628000-36a4828000 ---p 00028000 fd:08 20218 /usr/lib64/libsmime3.so\n\t\t36a4828000-36a482b000 r--p 00028000 fd:08 20218 /usr/lib64/libsmime3.so\n\t\t36a482b000-36a482c000 rw-p 0002b000 fd:08 
20218 /usr/lib64/libsmime3.so\n\t\t36a4a00000-36a4a02000 r-xp 00000000 fd:01 237 /lib64/libkeyutils.so.1.3\n\t\t36a4a02000-36a4c01000 ---p 00002000 fd:01 237 /lib64/libkeyutils.so.1.3\n\t\t36a4c01000-36a4c02000 r--p 00001000 fd:01 237 /lib64/libkeyutils.so.1.3\n\t\t36a4c02000-36a4c03000 rw-p 00002000 fd:01 237 /lib64/libkeyutils.so.1.3\n\t\t36a4e00000-36a4e0a000 r-xp 00000000 fd:01 5190 /lib64/libkrb5support.so.0.1\n\t\t36a4e0a000-36a5009000 ---p 0000a000 fd:01 5190 /lib64/libkrb5support.so.0.1\n\t\t36a5009000-36a500a000 r--p 00009000 fd:01 5190 /lib64/libkrb5support.so.0.1\n\t\t36a500a000-36a500b000 rw-p 0000a000 fd:01 5190 /lib64/libkrb5support.so.0.1\n\t\t36a5200000-36a52d4000 r-xp 00000000 fd:01 5193 /lib64/libkrb5.so.3.3\n\t\t36a52d4000-36a54d4000 ---p 000d4000 fd:01 5193 /lib64/libkrb5.so.3.3\n\t\t36a54d4000-36a54dd000 r--p 000d4000 fd:01 5193 /lib64/libkrb5.so.3.3\n\t\t36a54dd000-36a54df000 rw-p 000dd000 fd:01 5193 /lib64/libkrb5.so.3.3\n\t\t36a5600000-36a563f000 r-xp 00000000 fd:01 5194 /lib64/libgssapi_krb5.so.2.2\n\t\t36a563f000-36a583f000 ---p 0003f000 fd:01 5194 /lib64/libgssapi_krb5.so.2.2\n\t\t36a583f000-36a5840000 r--p 0003f000 fd:01 5194 /lib64/libgssapi_krb5.so.2.2\n\t\t36a5840000-36a5842000 rw-p 00040000 fd:01 5194 /lib64/libgssapi_krb5.so.2.2\n\t\t36a5a00000-36a5a51000 r-xp 00000000 fd:08 6885 /usr/lib64/libcurl.so.4.1.1\n\t\t36a5a51000-36a5c50000 ---p 00051000 fd:08 6885 /usr/lib64/libcurl.so.4.1.1\n\t\t36a5c50000-36a5c53000 rw-p 00050000 fd:08 6885 /usr/lib64/libcurl.so.4.1.1\n\t\t36a5e00000-36a5e2a000 r-xp 00000000 fd:01 5191 /lib64/libk5crypto.so.3.1\n\t\t36a5e2a000-36a6029000 ---p 0002a000 fd:01 5191 /lib64/libk5crypto.so.3.1\n\t\t36a6029000-36a602b000 r--p 00029000 fd:01 5191 /lib64/libk5crypto.so.3.1\n\t\t36a602b000-36a602c000 rw-p 0002b000 fd:01 5191 /lib64/libk5crypto.so.3.1\n\t\t36a6200000-36a6226000 r-xp 00000000 fd:08 16520 /usr/lib64/libssh2.so.1.0.1\n\t\t36a6226000-36a6426000 ---p 00026000 fd:08 16520 
/usr/lib64/libssh2.so.1.0.1\n\t\t36a6426000-36a6427000 rw-p 00026000 fd:08 16520 /usr/lib64/libssh2.so.1.0.1\n\t\t36a7200000-36a7262000 r-xp 00000000 fd:08 17699 /usr/lib64/libssl.so.1.0.1e\n\t\t36a7262000-36a7461000 ---p 00062000 fd:08 17699 /usr/lib64/libssl.so.1.0.1e\n\t\t36a7461000-36a7465000 r--p 00061000 fd:08 17699 /usr/lib64/libssl.so.1.0.1e\n\t\t36a7465000-36a746c000 rw-p 00065000 fd:08 17699 /usr/lib64/libssl.so.1.0.1e\n\t\t7f61c8000000-7f61c8021000 rw-p 00000000 00:00 0 \n\t\t7f61c8021000-7f61cc000000 ---p 00000000 00:00 0 \n\t\t7f61ccdba000-7f61ccdc0000 r-xp 00000000 fd:08 266931 /usr/pgsql-10/lib/btree_gin.so\n\t\t7f61ccdc0000-7f61ccfbf000 ---p 00006000 fd:08 266931 /usr/pgsql-10/lib/btree_gin.so\n\t\t7f61ccfbf000-7f61ccfc0000 rw-p 00005000 fd:08 266931 /usr/pgsql-10/lib/btree_gin.so\n\t\t7f61ccfc0000-7f61cd1df000 rw-p 00000000 00:00 0 \n\t\t7f61cd1df000-7f61cd1eb000 r-xp 00000000 fd:01 3754 /lib64/libnss_files-2.12.so\n\t\t7f61cd1eb000-7f61cd3eb000 ---p 0000c000 fd:01 3754 /lib64/libnss_files-2.12.so\n\t\t7f61cd3eb000-7f61cd3ec000 r--p 0000c000 fd:01 3754 /lib64/libnss_files-2.12.so\n\t\t7f61cd3ec000-7f61cd3ed000 rw-p 0000d000 fd:01 3754 /lib64/libnss_files-2.12.so\n\t\t7f61cd3f7000-7f61cd406000 rw-s 00000000 00:10 1491041 /dev/shm/PostgreSQL.1869824892\n\t\t7f61cd406000-7f63a0bd6000 rw-s 00000000 00:04 1491038 /dev/zero (deleted)\n\t\t7f63a0bd6000-7f63a0bd9000 r-xp 00000000 fd:08 268834 /usr/pgsql-10/lib/timescaledb.so\n\t\t7f63a0bd9000-7f63a0dd8000 ---p 00003000 fd:08 268834 /usr/pgsql-10/lib/timescaledb.so\n\t\t7f63a0dd8000-7f63a0dd9000 rw-p 00002000 fd:08 268834 /usr/pgsql-10/lib/timescaledb.so\n\t\t7f63a0dd9000-7f63a0dda000 r-xp 00000000 fd:08 266927 /usr/pgsql-10/lib/auth_delay.so\n\t\t7f63a0dda000-7f63a0fd9000 ---p 00001000 fd:08 266927 /usr/pgsql-10/lib/auth_delay.so\n\t\t7f63a0fd9000-7f63a0fda000 rw-p 00000000 fd:08 266927 /usr/pgsql-10/lib/auth_delay.so\n\t\t7f63a0fda000-7f63a0fdc000 r-xp 00000000 fd:08 266928 
/usr/pgsql-10/lib/auto_explain.so\n\t\t7f63a0fdc000-7f63a11dc000 ---p 00002000 fd:08 266928 /usr/pgsql-10/lib/auto_explain.so\n\t\t7f63a11dc000-7f63a11dd000 rw-p 00002000 fd:08 266928 /usr/pgsql-10/lib/auto_explain.so\n\t\t7f63a11dd000-7f63a11e5000 r-xp 00000000 fd:08 266954 /usr/pgsql-10/lib/pg_stat_statements.so\n\t\t7f63a11e5000-7f63a13e4000 ---p 00008000 fd:08 266954 /usr/pgsql-10/lib/pg_stat_statements.so\n\t\t7f63a13e4000-7f63a13e5000 rw-p 00007000 fd:08 266954 /usr/pgsql-10/lib/pg_stat_statements.so\n\t\t7f63a13e5000-7f63a1433000 r-xp 00000000 fd:01 644 /lib64/libldap_r-2.4.so.2.5.6\n\t\t7f63a1433000-7f63a1633000 ---p 0004e000 fd:01 644 /lib64/libldap_r-2.4.so.2.5.6\n\t\t7f63a1633000-7f63a1634000 r--p 0004e000 fd:01 644 /lib64/libldap_r-2.4.so.2.5.6\n\t\t7f63a1634000-7f63a1636000 rw-p 0004f000 fd:01 644 /lib64/libldap_r-2.4.so.2.5.6\n\t\t7f63a1636000-7f63a1638000 rw-p 00000000 00:00 0 \n\t\t7f63a1638000-7f63a166a000 r-xp 00000000 fd:01 5189 /lib64/libidn.so.11.6.1\n\t\t7f63a166a000-7f63a1869000 ---p 00032000 fd:01 5189 /lib64/libidn.so.11.6.1\n\t\t7f63a1869000-7f63a186a000 rw-p 00031000 fd:01 5189 /lib64/libidn.so.11.6.1\n\t\t7f63a186a000-7f63a18af000 r-xp 00000000 fd:08 266281 /usr/pgsql-10/lib/libpq.so.5.10\n\t\t7f63a18af000-7f63a1aaf000 ---p 00045000 fd:08 266281 /usr/pgsql-10/lib/libpq.so.5.10\n\t\t7f63a1aaf000-7f63a1ab2000 rw-p 00045000 fd:08 266281 /usr/pgsql-10/lib/libpq.so.5.10\n\t\t7f63a1ab2000-7f63a1b5a000 r-xp 00000000 fd:08 268586 /usr/pgsql-10/lib/citus.so\n\t\t7f63a1b5a000-7f63a1d5a000 ---p 000a8000 fd:08 268586 /usr/pgsql-10/lib/citus.so\n\t\t7f63a1d5a000-7f63a1d5e000 rw-p 000a8000 fd:08 268586 /usr/pgsql-10/lib/citus.so\n\t\t7f63a1d5e000-7f63a1d64000 rw-p 00000000 00:00 0 \n\t\t7f63a1d64000-7f63a7bf5000 r--p 00000000 fd:08 21655 /usr/lib/locale/locale-archive\n\t\t7f63a7bf5000-7f63a7bfd000 rw-p 00000000 00:00 0 \n\t\t7f63a7bfd000-7f63a7ce5000 r-xp 00000000 fd:08 1530 /usr/lib64/libstdc++.so.6.0.13\n\t\t7f63a7ce5000-7f63a7ee5000 ---p 000e8000 
fd:08 1530 /usr/lib64/libstdc++.so.6.0.13\n\t\t7f63a7ee5000-7f63a7eec000 r--p 000e8000 fd:08 1530 /usr/lib64/libstdc++.so.6.0.13\n\t\t7f63a7eec000-7f63a7eee000 rw-p 000ef000 fd:08 1530 /usr/lib64/libstdc++.so.6.0.13\n\t\t7f63a7eee000-7f63a7f03000 rw-p 00000000 00:00 0 \n\t\t7f63a7f03000-7f63a8e48000 r-xp 00000000 fd:08 22589 /usr/lib64/libicudata.so.42.1\n\t\t7f63a8e48000-7f63a9047000 ---p 00f45000 fd:08 22589 /usr/lib64/libicudata.so.42.1\n\t\t7f63a9047000-7f63a9048000 rw-p 00f44000 fd:08 22589 /usr/lib64/libicudata.so.42.1\n\t\t7f63a9048000-7f63a904e000 rw-p 00000000 00:00 0 \n\t\t7f63a904e000-7f63a9065000 r-xp 00000000 fd:01 5166 /lib64/libaudit.so.1.0.0\n\t\t7f63a9065000-7f63a9264000 ---p 00017000 fd:01 5166 /lib64/libaudit.so.1.0.0\n\t\t7f63a9264000-7f63a9265000 r--p 00016000 fd:01 5166 /lib64/libaudit.so.1.0.0\n\t\t7f63a9265000-7f63a926a000 rw-p 00017000 fd:01 5166 /lib64/libaudit.so.1.0.0\n\t\t7f63a926a000-7f63a927f000 r-xp 00000000 fd:01 5181 /lib64/libz.so.1.2.3\n\t\t7f63a927f000-7f63a947e000 ---p 00015000 fd:01 5181 /lib64/libz.so.1.2.3\n\t\t7f63a947e000-7f63a947f000 r--p 00014000 fd:01 5181 /lib64/libz.so.1.2.3\n\t\t7f63a947f000-7f63a9480000 rw-p 00015000 fd:01 5181 /lib64/libz.so.1.2.3\n\t\t7f63a9480000-7f63a9481000 rw-p 00000000 00:00 0 \n\t\t7f63a9481000-7f63a95c0000 r-xp 00000000 fd:08 22601 /usr/lib64/libicuuc.so.42.1\n\t\t7f63a95c0000-7f63a97c0000 ---p 0013f000 fd:08 22601 /usr/lib64/libicuuc.so.42.1\n\t\t7f63a97c0000-7f63a97d1000 rw-p 0013f000 fd:08 22601 /usr/lib64/libicuuc.so.42.1\n\t\t7f63a97d1000-7f63a97d3000 rw-p 00000000 00:00 0 \n\t\t7f63a97d3000-7f63a995b000 r-xp 00000000 fd:08 22591 /usr/lib64/libicui18n.so.42.1\n\t\t7f63a995b000-7f63a9b5b000 ---p 00188000 fd:08 22591 /usr/lib64/libicui18n.so.42.1\n\t\t7f63a9b5b000-7f63a9b68000 rw-p 00188000 fd:08 22591 /usr/lib64/libicui18n.so.42.1\n\t\t7f63a9b68000-7f63a9b6d000 rw-p 00000000 00:00 0 \n\t\t7f63a9b75000-7f63a9b76000 rw-p 00000000 00:00 0 \n\t\t7f63a9b76000-7f63a9b77000 rw-s 00000000 00:04 
262145 /SYSV0052e2c1 (deleted)\n\t\t7f63a9b77000-7f63a9b78000 rw-p 00000000 00:00 0 \n\t\t7fff4a3fc000-7fff4a411000 rw-p 00000000 00:00 0 [stack]\n\t\t7fff4a4c3000-7fff4a4c4000 r-xp 00000000 00:00 0 [vdso]\n\t\tffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]\n\t\t2019-11-24 11:07:53.735 CST 14713 127.0.0.1(39348) lobausr lobadbw2 citus - 10.230.239.36:49809 LOG: process 14713 still waiting for ShareLock on transaction 971951123 after 1000.135 ms\n\t\t2019-11-24 11:07:53.735 CST 14713 127.0.0.1(39348) lobausr lobadbw2 citus - 10.230.239.36:49809 DETAIL: Process holding the lock: 0. Wait queue: 14713.\n\t\t2019-11-24 11:07:53.735 CST 14713 127.0.0.1(39348) lobausr lobadbw2 citus - 10.230.239.36:49809 CONTEXT: while deleting tuple (88025,10) in relation \"loba_tt_ldcs_zcd_undo_103061\"\n\t\t2019-11-24 11:07:53.735 CST 14713 127.0.0.1(39348) lobausr lobadbw2 citus - 10.230.239.36:49809 STATEMENT: DELETE FROM lobauser.loba_tt_ldcs_zcd_undo_103061 loba_tt_ldcs_zcd_undo WHERE (((zcbsf)::text OPERATOR(pg_catalog.=) $1) AND ((exidv)::text OPERATOR(pg_catalog.=) $2))\n\t\t2019-11-24 11:07:53.751 CST 5930 127.0.0.1(38471) lobausr lobadbw2 citus - 10.230.239.36:47339 LOG: process 5930 still waiting for AccessExclusiveLock on tuple (88025,10) of relation 129161 of database 16397 after 1000.107 ms\n\t\t2019-11-24 11:07:53.751 CST 5930 127.0.0.1(38471) lobausr lobadbw2 citus - 10.230.239.36:47339 DETAIL: Process holding the lock: 14713. 
Wait queue: 5930, 14710, 13352.\n\t\t2019-11-24 11:07:53.751 CST 5930 127.0.0.1(38471) lobausr lobadbw2 citus - 10.230.239.36:47339 STATEMENT: DELETE FROM lobauser.loba_tt_ldcs_zcd_undo_103061 loba_tt_ldcs_zcd_undo WHERE (((zcbsf)::text OPERATOR(pg_catalog.=) $1) AND ((exidv)::text OPERATOR(pg_catalog.=) $2))\n\t\t2019-11-24 11:07:53.767 CST 14710 127.0.0.1(39345) lobausr lobadbw2 citus - 10.230.239.36:49771 LOG: process 14710 still waiting for AccessExclusiveLock on tuple (88025,10) of relation 129161 of database 16397 after 1000.065 ms\n\t\t2019-11-24 11:07:53.767 CST 14710 127.0.0.1(39345) lobausr lobadbw2 citus - 10.230.239.36:49771 DETAIL: Process holding the lock: 14713. Wait queue: 5930, 14710, 13352.\n\t\t2019-11-24 11:07:53.767 CST 14710 127.0.0.1(39345) lobausr lobadbw2 citus - 10.230.239.36:49771 STATEMENT: DELETE FROM lobauser.loba_tt_ldcs_zcd_undo_103061 loba_tt_ldcs_zcd_undo WHERE (((zcbsf)::text OPERATOR(pg_catalog.=) $1) AND ((exidv)::text OPERATOR(pg_catalog.=) $2))\n\t\t2019-11-24 11:07:53.776 CST 13352 127.0.0.1(39181) lobausr lobadbw2 citus - 10.230.239.36:19373 LOG: process 13352 still waiting for AccessExclusiveLock on tuple (88025,10) of relation 129161 of database 16397 after 1000.071 ms\n\t\t2019-11-24 11:07:53.776 CST 13352 127.0.0.1(39181) lobausr lobadbw2 citus - 10.230.239.36:19373 DETAIL: Process holding the lock: 14713. 
Wait queue: 5930, 14710, 13352.\n\t\t2019-11-24 11:07:53.776 CST 13352 127.0.0.1(39181) lobausr lobadbw2 citus - 10.230.239.36:19373 STATEMENT: DELETE FROM lobauser.loba_tt_ldcs_zcd_undo_103061 loba_tt_ldcs_zcd_undo WHERE (((zcbsf)::text OPERATOR(pg_catalog.=) $1) AND ((exidv)::text OPERATOR(pg_catalog.=) $2))\n\t\t2019-11-24 11:08:09.854 CST 14668 LOG: server process (PID 14714) was terminated by signal 6: Aborted\n\t\t2019-11-24 11:08:09.854 CST 14668 DETAIL: Failed process was running: COMMIT PREPARED 'citus_0_35791_4207001212_1287199'\n\t\t2019-11-24 11:08:09.854 CST 14668 LOG: terminating any other active server processes\n\n3.2 stacktrace\n\n\t(gdb) bt\n\t#0 0x000000369e232625 in raise () from /lib64/libc.so.6\n\t#1 0x000000369e233e05 in abort () from /lib64/libc.so.6\n\t#2 0x000000369e270537 in __libc_message () from /lib64/libc.so.6\n\t#3 0x000000369e275f4e in malloc_printerr () from /lib64/libc.so.6\n\t#4 0x000000369e278cf0 in _int_free () from /lib64/libc.so.6\n\t#5 0x00000000004ff947 in XLogReaderFree (state=0x1a403a8) at xlogreader.c:141\n\t#6 0x00000000004e4387 in XlogReadTwoPhaseData (lsn=32886947137584, buf=0x7fff4a40ec38, len=0x0) at twophase.c:1341\n\t#7 0x00000000004e5849 in FinishPreparedTransaction (gid=0x19d7830 \"citus_0_35791_4207001212_1287199\", isCommit=1 '\\001') at twophase.c:1411\n\t#8 0x000000000072d601 in standard_ProcessUtility (pstmt=0x19d7ba0, queryString=0x19d6e48 \"COMMIT PREPARED 'citus_0_35791_4207001212_1287199'\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, \n\t queryEnv=0x0, dest=0x19d7c80, completionTag=0x7fff4a40f260 \"\") at utility.c:460\n\t#9 0x00007f63a1ae97e1 in multi_ProcessUtility (pstmt=0x19d7ba0, queryString=0x19d6e48 \"COMMIT PREPARED 'citus_0_35791_4207001212_1287199'\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, \n\t queryEnv=0x0, dest=0x19d7c80, completionTag=0x7fff4a40f260 \"\") at executor/multi_utility.c:254\n\t#10 0x00007f63a11e1178 in pgss_ProcessUtility (pstmt=0x19d7ba0, queryString=0x19d6e48 
\"COMMIT PREPARED 'citus_0_35791_4207001212_1287199'\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, \n\t queryEnv=0x0, dest=0x19d7c80, completionTag=0x7fff4a40f260 \"\") at pg_stat_statements.c:998\n\t#11 0x0000000000729388 in PortalRunUtility (portal=0x1a27128, pstmt=0x19d7ba0, isTopLevel=<value optimized out>, setHoldSnapshot=<value optimized out>, dest=0x19d7c80, \n\t completionTag=<value optimized out>) at pquery.c:1178\n\t#12 0x000000000072a2fd in PortalRunMulti (portal=0x1a27128, isTopLevel=1 '\\001', setHoldSnapshot=0 '\\000', dest=0x19d7c80, altdest=0x19d7c80, completionTag=0x7fff4a40f260 \"\")\n\t at pquery.c:1331\n\t#13 0x000000000072aa98 in PortalRun (portal=0x1a27128, count=9223372036854775807, isTopLevel=1 '\\001', run_once=1 '\\001', dest=0x19d7c80, altdest=0x19d7c80, completionTag=0x7fff4a40f260 \"\")\n\t at pquery.c:799\n\t#14 0x0000000000727051 in exec_simple_query (query_string=0x19d6e48 \"COMMIT PREPARED 'citus_0_35791_4207001212_1287199'\") at postgres.c:1122\n\t#15 0x0000000000728039 in PostgresMain (argc=<value optimized out>, argv=<value optimized out>, dbname=0x1952b48 \"lobadbw2\", username=<value optimized out>) at postgres.c:4117\n\t#16 0x00000000006bb43a in BackendRun (argc=<value optimized out>, argv=<value optimized out>) at postmaster.c:4405\n\t#17 BackendStartup (argc=<value optimized out>, argv=<value optimized out>) at postmaster.c:4077\n\t#18 ServerLoop (argc=<value optimized out>, argv=<value optimized out>) at postmaster.c:1755\n\t#19 PostmasterMain (argc=<value optimized out>, argv=<value optimized out>) at postmaster.c:1363\n\t#20 0x000000000063b4d0 in main (argc=3, argv=0x190c520) at main.c:228\n\t(gdb) f 5\n\t#5 0x00000000004ff947 in XLogReaderFree (state=0x1a403a8) at xlogreader.c:141\n\t141\txlogreader.c: No such file or directory.\n\t\tin xlogreader.c\n\t(gdb) p state->readRecordBuf\n\t$3 = 0x1a977d8 \"M\\001\"\n\t(gdb) p state->readRecordBufSize\n\t$7 = 40960\n\t(gdb) \n \n\n\n--\nLIANGBO@suning.com\n\n\n\n",
"msg_date": "Wed, 27 Nov 2019 19:52:48 +0800",
"msg_from": "\"LIANGBO\" <liangboa@suning.com>",
"msg_from_op": true,
"msg_subject": "[Incident report]Backend process crashed when executing 2pc\n transaction"
},
{
"msg_contents": "Hello,\n\nOn Wed, Nov 27, 2019 at 8:59 PM LIANGBO <liangboa@suning.com> wrote:\n> I've met the following problem in our product environment. We tried to reproduce the problem, but because of the low probability of occurrence, we could not reproduce it.\n> 1. phenomenon\n> Backend process crashed when executing 2pc transaction in citus.\n>\n> 2. Occurrence condition\n> Distributed transaction in business SQL\n>\n> PostgreSQL:10.7\n> citus:7.4.1\n> OS:RHEL6.3\n>\n> 3.2 stacktrace\n>\n> (gdb) bt\n> #0 0x000000369e232625 in raise () from /lib64/libc.so.6\n> #1 0x000000369e233e05 in abort () from /lib64/libc.so.6\n> #2 0x000000369e270537 in __libc_message () from /lib64/libc.so.6\n> #3 0x000000369e275f4e in malloc_printerr () from /lib64/libc.so.6\n> #4 0x000000369e278cf0 in _int_free () from /lib64/libc.so.6\n> #5 0x00000000004ff947 in XLogReaderFree (state=0x1a403a8) at xlogreader.c:141\n> #6 0x00000000004e4387 in XlogReadTwoPhaseData (lsn=32886947137584, buf=0x7fff4a40ec38, len=0x0) at twophase.c:1341\n> #7 0x00000000004e5849 in FinishPreparedTransaction (gid=0x19d7830 \"citus_0_35791_4207001212_1287199\", isCommit=1 '\\001') at twophase.c:1411\n> #8 0x000000000072d601 in standard_ProcessUtility (pstmt=0x19d7ba0, queryString=0x19d6e48 \"COMMIT PREPARED 'citus_0_35791_4207001212_1287199'\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0,\n> queryEnv=0x0, dest=0x19d7c80, completionTag=0x7fff4a40f260 \"\") at utility.c:460\n> #9 0x00007f63a1ae97e1 in multi_ProcessUtility (pstmt=0x19d7ba0, queryString=0x19d6e48 \"COMMIT PREPARED 'citus_0_35791_4207001212_1287199'\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0,\n> queryEnv=0x0, dest=0x19d7c80, completionTag=0x7fff4a40f260 \"\") at executor/multi_utility.c:254\n> #10 0x00007f63a11e1178 in pgss_ProcessUtility (pstmt=0x19d7ba0, queryString=0x19d6e48 \"COMMIT PREPARED 'citus_0_35791_4207001212_1287199'\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0,\n\nHave you considered *also* reporting this to Citus 
developers, because\nwhile the crash seems to have occurred in the core PostgreSQL code\nthey may have a better chance reproducing this if at all.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Thu, 28 Nov 2019 13:24:00 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Incident report]Backend process crashed when executing 2pc\n transaction"
},
{
"msg_contents": "On Thu, Nov 28, 2019 at 01:24:00PM +0900, Amit Langote wrote:\n> Have you considered *also* reporting this to Citus developers, because\n> while the crash seems to have occurred in the core PostgreSQL code\n> they may have a better chance reproducing this if at all.\n\nHard to fully conclude with the information at hand. Still, if you\nlook at the backtrace, it complains about readRecordBuf being already\nfree'd, which is something that happens only if it is not NULL and\nonly when freeing the reader. The thing is that this area is used\nonly as a temporary buffer for a record being read, which may\noptionally get extended. Please note as well that the stack trace\nmentions multi_ProcessUtility(), which is not Postgres code. So my\ngut actually tells me that this is a Citus-only bug, and that there is\nan issue with some memory context cleanup in a xact callback or such.\nJust a guess, but this could explain why the memory area of\nreadRecordBuf just went magically away.\n\nIf you can produce a test case with just Postgres, that's another\nstory of course, and if it were a bug in Postgres, I would imagine\nthat a simple pgbench test running a lot of 2PC transactions in\nparallel may be able to reproduce it after some time.\n--\nMichael",
"msg_date": "Thu, 28 Nov 2019 13:47:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [Incident report]Backend process crashed when executing 2pc\n transaction"
},
{
"msg_contents": "Hello:\n\n> Have you considered *also* reporting this to Citus developers, because while the crash seems to have occurred in the core PostgreSQL code they may have a better chance reproducing this if at all.\n\nI've sent this issue to the citus community, and then received the reply with \"Just a note that this appears to be a bug in Postgres 2PC code.\".\nhttps://github.com/citusdata/citus/issues/3228\n\n\n\n-----Original Message-----\nFrom: Amit Langote [mailto:amitlangote09@gmail.com] \nSent: November 28, 2019 12:24\nTo: LIANGBO\nCc: PostgreSQL Hackers\nSubject: Re: [Incident report]Backend process crashed when executing 2pc transaction\n\nHello,\n\nOn Wed, Nov 27, 2019 at 8:59 PM LIANGBO <liangboa@suning.com> wrote:\n> I've met the following problem in our product environment. We tried to reproduce the problem, but because of the low probability of occurrence, we could not reproduce it.\n> 1. phenomenon\n> Backend process crashed when executing 2pc transaction in citus.\n>\n> 2. Occurrence condition\n> Distributed transaction in business SQL\n>\n> PostgreSQL:10.7\n> citus:7.4.1\n> OS:RHEL6.3\n>\n> 3.2 stacktrace\n>\n> (gdb) bt\n> #0 0x000000369e232625 in raise () from /lib64/libc.so.6\n> #1 0x000000369e233e05 in abort () from /lib64/libc.so.6\n> #2 0x000000369e270537 in __libc_message () from /lib64/libc.so.6\n> #3 0x000000369e275f4e in malloc_printerr () from /lib64/libc.so.6\n> #4 0x000000369e278cf0 in _int_free () from /lib64/libc.so.6\n> #5 0x00000000004ff947 in XLogReaderFree (state=0x1a403a8) at xlogreader.c:141\n> #6 0x00000000004e4387 in XlogReadTwoPhaseData (lsn=32886947137584, buf=0x7fff4a40ec38, len=0x0) at twophase.c:1341\n> #7 0x00000000004e5849 in FinishPreparedTransaction (gid=0x19d7830 \"citus_0_35791_4207001212_1287199\", isCommit=1 '\\001') at twophase.c:1411\n> #8 0x000000000072d601 in standard_ProcessUtility (pstmt=0x19d7ba0, queryString=0x19d6e48 \"COMMIT PREPARED 'citus_0_35791_4207001212_1287199'\", context=PROCESS_UTILITY_TOPLEVEL, 
params=0x0,\n> queryEnv=0x0, dest=0x19d7c80, completionTag=0x7fff4a40f260 \"\") at utility.c:460\n> #9 0x00007f63a1ae97e1 in multi_ProcessUtility (pstmt=0x19d7ba0, queryString=0x19d6e48 \"COMMIT PREPARED 'citus_0_35791_4207001212_1287199'\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0,\n> queryEnv=0x0, dest=0x19d7c80, completionTag=0x7fff4a40f260 \"\") at executor/multi_utility.c:254\n> #10 0x00007f63a11e1178 in pgss_ProcessUtility \n> (pstmt=0x19d7ba0, queryString=0x19d6e48 \"COMMIT PREPARED \n> 'citus_0_35791_4207001212_1287199'\", context=PROCESS_UTILITY_TOPLEVEL, \n> params=0x0,\n\nHave you considered *also* reporting this to Citus developers, because while the crash seems to have occurred in the core PostgreSQL code they may have a better chance reproducing this if at all.\n\nThanks,\nAmit\n\n\n\n\n\n",
"msg_date": "Thu, 28 Nov 2019 13:00:47 +0800",
"msg_from": "\"LIANGBO\" <liangboa@suning.com>",
"msg_from_op": true,
"msg_subject": "Re: [Incident report]Backend process crashed when executing 2pc\n transaction"
},
{
"msg_contents": "On Thu, Nov 28, 2019 at 2:00 PM LIANGBO <liangboa@suning.com> wrote:\n>\n> Hello:\n>\n> > Have you considered *also* reporting this to Citus developers, because while the crash seems to have occurred in the core PostgreSQL code they may have a better chance reproducing this if at all.\n>\n> I've sent this issue to the citus community, and then received the reply with \"Just a note that this appears to be a bug in Postgres 2PC code.\".\n> https://github.com/citusdata/citus/issues/3228\n\nInteresting. Still, I think you'd be in better position than anyone\nelse to come up with reproduction steps for vanilla PostgreSQL by\nanalyzing the stack trace if and when the crash next occurs (or using\nthe existing core dump). It's hard to tell by only guessing what may\nhave gone wrong when there is external code involved, especially\nsomething like Citus that hooks into many points within vanilla\nPostgreSQL.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Thu, 28 Nov 2019 14:18:20 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Incident report]Backend process crashed when executing 2pc\n transaction"
},
{
"msg_contents": "On Thu, Nov 28, 2019 at 6:18 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> Interesting. Still, I think you'd be in better position than anyone\n> else to come up with reproduction steps for vanilla PostgreSQL by\n> analyzing the stack trace if and when the crash next occurs (or using\n> the existing core dump). It's hard to tell by only guessing what may\n> have gone wrong when there is external code involved, especially\n> something like Citus that hooks into many points within vanilla\n> PostgreSQL.\n\nTo clarify: In a Citus cluster you typically have a coordinator which\ncontains the \"distributed tables\" and one or more workers which\ncontain the data. All are PostgreSQL servers with the citus extension.\nThe coordinator uses every available hook in PostgreSQL to make the\ndistributed tables behave like regular tables. Any crash on the\ncoordinator is likely to be attributable to Citus, because most of the\ncode that is exercised is Citus code. The workers are used as regular\nPostgreSQL servers with the coordinator acting as a regular client. On\nthe worker, the ProcessUtility hook will just pass on the arguments to\nstandard_ProcessUtility without any processing. The crash happened on\na worker.\n\nOne interesting thing is the prepared transaction name generated by\nthe coordinator, which follows the form: citus_<coordinator node\nid>_<pid>_<server-wide transaction number >_<prepared transaction\nnumber in session>. The server-wide transaction number is a 64-bit\ncounter that is kept in shared memory and starts at 1. That means that\nover 4 billion (4207001212) transactions happened on the coordinator\nsince the server started, which quite possibly resulted in 4 billion\nprepared transactions on this particular server. I'm wondering if some\ncounter is overflowing.\n\ncheers,\nMarco\n\n\n",
"msg_date": "Thu, 28 Nov 2019 09:01:55 +0100",
"msg_from": "Marco Slot <marco@citusdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [Incident report]Backend process crashed when executing 2pc\n transaction"
},
{
"msg_contents": "Hi Marco,\n\nOn Thu, Nov 28, 2019 at 5:02 PM Marco Slot <marco@citusdata.com> wrote:\n>\n> On Thu, Nov 28, 2019 at 6:18 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Interesting. Still, I think you'd be in better position than anyone\n> > else to come up with reproduction steps for vanilla PostgreSQL by\n> > analyzing the stack trace if and when the crash next occurs (or using\n> > the existing core dump). It's hard to tell by only guessing what may\n> > have gone wrong when there is external code involved, especially\n> > something like Citus that hooks into many points within vanilla\n> > PostgreSQL.\n>\n> To clarify: In a Citus cluster you typically have a coordinator which\n> contains the \"distributed tables\" and one or more workers which\n> contain the data. All are PostgreSQL servers with the citus extension.\n> The coordinator uses every available hook in PostgreSQL to make the\n> distributed tables behave like regular tables. Any crash on the\n> coordinator is likely to be attributable to Citus, because most of the\n> code that is exercised is Citus code. The workers are used as regular\n> PostgreSQL servers with the coordinator acting as a regular client. On\n> the worker, the ProcessUtility hook will just pass on the arguments to\n> standard_ProcessUtility without any processing. The crash happened on\n> a worker.\n\nThanks for clarifying.\n\n> One interesting thing is the prepared transaction name generated by\n> the coordinator, which follows the form: citus_<coordinator node\n> id>_<pid>_<server-wide transaction number >_<prepared transaction\n> number in session>. The server-wide transaction number is a 64-bit\n> counter that is kept in shared memory and starts at 1. That means that\n> over 4 billion (4207001212) transactions happened on the coordinator\n> since the server started, which quite possibly resulted in 4 billion\n> prepared transactions on this particular server. I'm wondering if some\n> counter is overflowing.\n\nInteresting. This does kind of get us closer to figuring out what\nmight have gone wrong, but it is hard to tell without the core dump at hand.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Thu, 28 Nov 2019 18:01:18 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Incident report]Backend process crashed when executing 2pc\n transaction"
},
{
"msg_contents": "Marco wrote:\n> One interesting thing is the prepared transaction name generated by\n> the coordinator, which follows the form: citus_<coordinator node\n> id>_<pid>_<server-wide transaction number >_<prepared transaction\n> number in session>. The server-wide transaction number is a 64-bit\n> counter that is kept in shared memory and starts at 1. That means that\n> over 4 billion (4207001212) transactions happened on the coordinator\n> since the server started, which quite possibly resulted in 4 billion\n> prepared transactions on this particular server. I'm wondering if some\n> counter is overflowing.\n\nAmit wrote:\n>Interesting. This does kind of gets us closer to figuring out what\n>might have gone wrong, but hard to tell without the core dump at hand.\n\nIf something is corrupting memory, even if only rarely, it would be interesting to consider all the possibilities.\nThe MemSet call on line 785 (twophase.c) looks suspicious:\nthe size of the \"_vstart\" buffer is not a multiple of the size of type long.\nMaybe it's filling more than it should.\n\nRanier Vilela\n\n\n",
"msg_date": "Thu, 28 Nov 2019 17:37:55 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": false,
"msg_subject": "RE: [Incident report]Backend process crashed when executing 2pc\n transaction"
}
] |
[
{
"msg_contents": "I got interested in $SUBJECT as a result of the complaint at [1]\nabout typmods not being checked/enforced in places where they\nreasonably should be. The cause is that executor/functions.c's\ncheck_sql_fn_retval() only worries about types not typmods.\nAnother thing not to like is that it only supports cases where\nthe target type is binary-compatible with the source. It's not\ngreat that user-visible semantics depend on an implementation\ndetail like binary compatibility. (Amusingly, our user docs\nignore this altogether and claim that the types must be identical.)\n\nHence, the attached patch rearranges things so that we'll allow\nany case where the parser's standard coercion logic can find an\nassignment-level coercion, including typmod coercion if needed.\nIn a green field I might've argued for restricting this to\nimplicit coercions; but since some of the standard binary-compatible\ncasts are assignment-level, that would risk breaking applications\nthat work today. It's really safe enough though, just as assignment\ncoercions are fine in INSERT: there's no possible confusion about\nwhich conversion is appropriate.\n\nThis required some adjustments of check_sql_fn_retval's API.\nI found that pulling out the determination of the result tupdesc\nand making the callers do that was advisable: in most cases, the\ncaller has more information and can produce a more accurate tupdesc\n(eg by calling get_call_result_type not get_func_result_type).\nI also pulled out creation of the JunkFilter that functions.c\nwants (but none of the other callers do); having it in just one\nplace seems simpler. A nice side-effect of these changes is that\nwe can inline SQL functions in some cases where that wasn't\npossible before.\n\nThis could use review/testing, so I'll add it to the next CF.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/b2dbfbd0-8f12-4cca-a0b5-9ee3f00385b4%40phystech.edu",
"msg_date": "Wed, 27 Nov 2019 17:57:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Modernizing SQL functions' result type coercions"
},
{
"msg_contents": "On 28/11/2019 00:57, Tom Lane wrote:\n> Hence, the attached patch rearranges things so that we'll allow\n> any case where the parser's standard coercion logic can find an\n> assignment-level coercion, including typmod coercion if needed.\n> In a green field I might've argued for restricting this to\n> implicit coercions; but since some of the standard binary-compatible\n> casts are assignment-level, that would risk breaking applications\n> that work today. It's really safe enough though, just as assignment\n> coercions are fine in INSERT: there's no possible confusion about\n> which conversion is appropriate.\n\nMakes sense. That's a nice usability improvement.\n\n> This required some adjustments of check_sql_fn_retval's API.\n> I found that pulling out the determination of the result tupdesc\n> and making the callers do that was advisable: in most cases, the\n> caller has more information and can produce a more accurate tupdesc\n> (eg by calling get_call_result_type not get_func_result_type).\n> I also pulled out creation of the JunkFilter that functions.c\n> wants (but none of the other callers do); having it in just one\n> place seems simpler. A nice side-effect of these changes is that\n> we can inline SQL functions in some cases where that wasn't\n> possible before.\n\nIn init_sql_fcache(), one comment says that the junkfilter is \nresponsible for injecting NULLs for dropped columns, and a later comment \nsays that the junk filter gets \"rid of any dropped columns\". That seems \ncontradictory; which is it? Or does \"get rid of\" mean \"set to NULL\"?\n\nOther than that, looks good to me.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 8 Jan 2020 17:06:46 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Modernizing SQL functions' result type coercions"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> In init_sql_fcache(), one comment says that the junkfilter is \n> responsible for injecting NULLs for dropped columns, and a later comment \n> says that the junk filter gets \"rid of any dropped columns\". That seems \n> contradictory; which is it? Or does \"get rid of\" mean \"set to NULL\"?\n\nYeah, the second comment is sloppily worded; the first one is more\naccurate.\n\n> Other than that, looks good to me.\n\nThanks for reviewing! I'll fix that comment and push.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Jan 2020 10:22:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Modernizing SQL functions' result type coercions"
}
] |
[
{
"msg_contents": "Hi,\n\nI wonder why we don't use the same style for $subject as pg_basebackup\n--progress, that is, use a carriage return instead of a newline after\neach line reporting the number of tuples copied?\n\nAttached patch for that.\n\nThanks,\nAmit",
"msg_date": "Thu, 28 Nov 2019 10:41:14 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "pgbench -i progress output on terminal"
},
{
"msg_contents": "On Thu, Nov 28, 2019 at 10:41:14AM +0900, Amit Langote wrote:\n> I wonder why we don't use the same style for $subject as pg_basebackup\n> --progress, that is, use a carriage return instead of a newline after\n> each line reporting the number of tuples copied?\n> \n> Attached patch for that.\n\nI have not checked your patch in details, but +1 for the change.\n--\nMichael",
"msg_date": "Thu, 28 Nov 2019 11:51:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench -i progress output on terminal"
},
{
"msg_contents": "\nHello Amit,\n\n> I wonder why we don't use the same style for $subject as pg_basebackup\n> --progress, that is, use a carriage return instead of a newline after\n> each line reporting the number of tuples copied?\n\nWhy not.\n\n> Attached patch for that.\n\nI'll look into it. Could you add it to the CF app?\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 28 Nov 2019 08:35:23 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pgbench -i progress output on terminal"
},
{
"msg_contents": "Hi Fabien,\n\nOn Thu, Nov 28, 2019 at 4:35 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello Amit,\n>\n> > I wonder why we don't use the same style for $subject as pg_basebackup\n> > --progress, that is, use a carriage return instead of a newline after\n> > each line reporting the number of tuples copied?\n>\n> Why not.\n>\n> > Attached patch for that.\n>\n> I'll look into it. Could you add it to the CF app?\n\nGreat, done.\n\nhttps://commitfest.postgresql.org/26/2363/\n\nThanks,\nAmit\n\n\n",
"msg_date": "Thu, 28 Nov 2019 16:43:20 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pgbench -i progress output on terminal"
},
{
"msg_contents": "> I wonder why we don't use the same style for $subject as pg_basebackup\n> --progress, that is, use a carriage return instead of a newline after\n> each line reporting the number of tuples copied?\n\nPatch applies cleanly, compiles, and works for me.\n\nMy 0.02€:\n\nfprintf -> fputs or fputc to avoid format parsing, or maybe use %c in \nthe formats.\n\nAs the format is not constant, ISTM that vfprintf should be called, not \nfprintf (even if in practice fprintf does call vfprintf internally).\n\nI'm not sure what the compiler does with isatty(fileno(stderr)), maybe\nthe eol could be precomputed:\n\n char eol = isatty(...) ? '\\r' : '\\n';\n\nand reused afterwards in the loop:\n\n fprintf(stderr, \".... %c\", ..., eol);\n\nthat would remove the added in-loop printing.\n\n-- \nFabien.",
"msg_date": "Fri, 29 Nov 2019 14:13:15 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pgbench -i progress output on terminal"
},
{
"msg_contents": "Hi Fabien,\n\nOn Fri, Nov 29, 2019 at 10:13 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > I wonder why we don't use the same style for $subject as pg_basebackup\n> > --progress, that is, use a carriage return instead of a newline after\n> > each line reporting the number of tuples copied?\n>\n> Patch applies cleanly, compiles, and works for me.\n\nThanks a lot for the quick review.\n\n> My 0.02€:\n>\n> fprintf -> fputs or fputc to avoid a format parsing, or maybe use %c in\n> the formats.\n>\n> As the format is not constant, ISTM that vfprintf should be called, not\n> fprintf (even if in practice fprintf does call vfprintf internally).\n>\n> I'm not sure what the compilers does with isatty(fileno(stderr)), maybe\n> the eol could be precomputed:\n>\n> char eol = isatty(...) ? '\\r' : '\\n';\n>\n> and reused afterwards in the loop:\n>\n> fprintf(stderr, \".... %c\", ..., eol);\n>\n> that would remove the added in-loop printing.\n\nI have updated the patch based on these observations. Attached v2.\n\nThanks,\nAmit",
"msg_date": "Sat, 30 Nov 2019 02:26:18 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pgbench -i progress output on terminal"
},
{
"msg_contents": "\nHello Amit,\n\n> I have updated the patch based on these observations. Attached v2.\n\nPatch v2 applies & compiles cleanly, works for me.\n\nI'm not partial to Hungarian notation conventions, which are not widely \nused elsewhere in pg. I'd suggest eolchar -> eol or line_end or whatever, \nbut others may have a different opinion. Maybe having a char variable is a \nrare enough occurrence which warrants advertising it.\n\nMaybe use fputc instead of fprintf in the closing output?\n\nI'm unsure about what happens on MacOS and Windows terminals, but if it \nworks for other commands' progress options, it is probably all right.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sat, 30 Nov 2019 08:27:51 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pgbench -i progress output on terminal"
},
{
"msg_contents": "Hi Fabien,\n\nThanks for taking a look again.\n\nOn Sat, Nov 30, 2019 at 4:28 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > I have updated the patch based on these observations. Attached v2.\n>\n> Patch v2 applies & compiles cleanly, works for me.\n>\n> I'm not partial to Hungarian notation conventions, which is not widely\n> used elsewhere in pg. I'd suggest eolchar -> eol or line_end or whatever,\n> but others may have different opinion. Maybe having a char variable is a\n> rare enough occurence which warrants advertising it.\n\nOn second thought, I'm fine with just eol.\n\n> Maybe use fputc instead of fprintf in the closing output?\n\nOK, done.\n\n> I'm unsure about what happens on MacOS and Windows terminal, but if it\n> works for other commands progress options, it is probably all right.\n\nI wrote the v1 patch on CentOS Linux, and now on MacOS. It would be\ngreat if someone can volunteer to test on Windows terminal.\n\nAttached v3.\n\nThanks,\nAmit",
"msg_date": "Sat, 30 Nov 2019 20:28:55 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pgbench -i progress output on terminal"
},
{
"msg_contents": "\n> I wrote the v1 patch on CentOS Linux, and now on MacOS. It would be \n> great if someone can volunteer to test on Windows terminal.\n\nI do not have that.\n\n> Attached v3.\n\nPatch applies, compiles, works for me. No further comments.\n\nI switched the patch as ready.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sat, 30 Nov 2019 20:33:25 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pgbench -i progress output on terminal"
},
{
"msg_contents": "Hi Fabien,\n\nOn Sun, Dec 1, 2019 at 4:33 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> Patch applies, compiles, works for me. No further comments.\n>\n> I switched the patch as ready.\n\nThanks a lot.\n\nRegards,\nAmit\n\n\n",
"msg_date": "Mon, 2 Dec 2019 14:30:47 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pgbench -i progress output on terminal"
},
{
"msg_contents": "On Mon, Dec 02, 2019 at 02:30:47PM +0900, Amit Langote wrote:\n> On Sun, Dec 1, 2019 at 4:33 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>> Patch applies, compiles, works for me. No further comments.\n>>\n>> I switched the patch as ready.\n> \n> Thanks a lot.\n\nAn issue with the patch as proposed is that its style is different\nthan what pg_rewind and pg_basebackup do in the same cases, but who\ncares :)\n\nBy the way, the first patch sent on this thread had a bug when\nredirecting the output of stderr to a log file because it was printing\na newline for each loop done on naccounts, but you just want to print\na log every 100 rows or 100k rows depending on if the quiet mode is\nused or not, so the log file grew in size with mostly empty lines. v3\ndoes that correctly of course as you add the last character of one log\nline each time the log entry is printed.\n\nAnother question I have is why doing only that for the data\ninitialization phase? Wouldn't it make sense to be consistent with\nthe other tools having --progress and do the same dance in pgbench's\nprintProgressReport()?\n\nNB: Note as well that pgindent complains for one thing, a newline\nbefore the call to isatty.\n--\nMichael",
"msg_date": "Mon, 2 Dec 2019 15:28:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench -i progress output on terminal"
},
{
"msg_contents": "\n> Another question I have is why doing only that for the data \n> initialization phase? Wouldn't it make sense to be consistent with the \n> other tools having --progress and do the same dance in pgbench's \n> printProgressReport()?\n\nI thought of it but did not suggest it.\n\nWhen running a bench I like seeing the last few seconds status to see the \ndynamic evolution at a glance, and overwriting the previous line would \nhide that.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 2 Dec 2019 14:30:14 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pgbench -i progress output on terminal"
},
{
"msg_contents": "Thanks for the review.\n\nOn Mon, Dec 2, 2019 at 3:28 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Dec 02, 2019 at 02:30:47PM +0900, Amit Langote wrote:\n> > On Sun, Dec 1, 2019 at 4:33 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> >> Patch applies, compiles, works for me. No further comments.\n> >>\n> >> I switched the patch as ready.\n> >\n> > Thanks a lot.\n>\n> An issue with the patch as proposed is that its style is different\n> than what pg_rewind and pg_basebackup do in the same cases, but who\n> cares :)\n\nHow about adding a function, say print_progress_to_stderr(const char\n*fmt,...), exposed to the front-end utilities and use it from\neverywhere? Needless to say that it will contain the check for whether\nstderr points to terminal or a file and print accordingly.\n\n> By the way, the first patch sent on this thread had a bug when\n> redirecting the output of stderr to a log file because it was printing\n> a newline for each loop done on naccounts, but you just want to print\n> a log every 100 rows or 100k rows depending on if the quiet mode is\n> used or not, so the log file grew in size with mostly empty lines.\n\nNaive programming :(\n\n> Another question I have is why doing only that for the data\n> initialization phase? Wouldn't it make sense to be consistent with\n> the other tools having --progress and do the same dance in pgbench's\n> printProgressReport()?\n\nConsidering Fabien's comment on this, we will have to check which\nother instances are printing information that is not very useful to\nlook at line-by-line.\n\n> NB: Note as well that pgindent complains for one thing, a newline\n> before the call to isatty.\n\nFixed.\n\nAttached v4.\n\nThanks,\nAmit",
"msg_date": "Tue, 3 Dec 2019 10:30:35 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pgbench -i progress output on terminal"
},
{
"msg_contents": "\n> Attached v4.\n\nPatch applies cleanly, compiles, works for me. Put it back to ready.\n\n-- \nFabien.\n\n\n",
"msg_date": "Tue, 3 Dec 2019 16:55:04 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pgbench -i progress output on terminal"
},
{
"msg_contents": "On Tue, Dec 03, 2019 at 10:30:35AM +0900, Amit Langote wrote:\n> How about adding a function, say print_progress_to_stderr(const char\n> *fmt,...), exposed to the front-end utilities and use it from\n> everywhere? Needless to say that it will contain the check for whether\n> stderr points to terminal or a file and print accordingly.\n\nI have considered this point, but that does not seem worth the\ncomplication as each tool has its own idea of the log output, its own\nidea of the log output timing and its own idea of when it is necessary\nto print the last newline when finishing to the output with '\\r'.\n\n> Considering Fabien's comment on this, we will have to check which\n> other instances are printing information that is not very useful to\n> look at line-by-line.\n\nThanks, applied the part for the initialization to HEAD. I got to\nthink about Fabien's point and it is true that for pgbench's\n--progress not keeping things on the same line for a terminal has\nadvantages because the data printed is not cumulative: that's a\nsummary of the previous state printed which can be compared.\n\nNote: the patch works on Windows, no problem.\n--\nMichael",
"msg_date": "Wed, 4 Dec 2019 11:35:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench -i progress output on terminal"
},
{
"msg_contents": "On Wed, Dec 4, 2019 at 11:35 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Dec 03, 2019 at 10:30:35AM +0900, Amit Langote wrote:\n> > How about adding a function, say print_progress_to_stderr(const char\n> > *fmt,...), exposed to the front-end utilities and use it from\n> > everywhere? Needless to say that it will contain the check for whether\n> > stderr points to terminal or a file and print accordingly.\n>\n> I have considered this point, but that does not seem worth the\n> complication as each tool has its own idea of the log output, its own\n> idea of the log output timing and its own idea of when it is necessary\n> to print the last newline when finishing to the output with '\\r'.\n\nOkay, seems more trouble than worth to design around all that.\n\n> > Considering Fabien's comment on this, we will have to check which\n> > other instances are printing information that is not very useful to\n> > look at line-by-line.\n>\n> Thanks, applied the part for the initialization to HEAD. I got to\n> think about Fabien's point and it is true that for pgbench's\n> --progress not keeping things on the same line for a terminal has\n> advantages because the data printed is not cumulative: that's a\n> summary of the previous state printed which can be compared.\n>\n> Note: the patch works on Windows, no problem.\n\nThanks for checking and committing the patch.\n\nRegards,\nAmit\n\n\n",
"msg_date": "Wed, 4 Dec 2019 15:10:55 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pgbench -i progress output on terminal"
}
] |
[
{
"msg_contents": "Hi.\n\nI'm using the \"Incremental Materialized View Maintenance\" patch and would\nlike to report the following issue.\n(https://commitfest.postgresql.org/25/2138/)\n\npsql's tab completion suggests \"DROP INCREMENTAL MATERIALIZED VIEW\", but\nrunning it causes a syntax error.\n(The \"DROP MATERIALIZED VIEW\" command can drop an incremental materialized\nview normally.)\n\n\nramendb=# CREATE INCREMENTAL MATERIALIZED VIEW pref_count AS SELECT pref,\nCOUNT(pref) FROM shops GROUP BY pref;\nSELECT 48\nramendb=# \\d pref_count\n Materialized view \"public.pref_count\"\n Column | Type | Collation | Nullable | Default\n---------------+--------+-----------+----------+---------\n pref | text | | |\n count | bigint | | |\n __ivm_count__ | bigint | | |\n\nramendb=# DROP IN\nINCREMENTAL MATERIALIZED VIEW INDEX\nramendb=# DROP INCREMENTAL MATERIALIZED VIEW pref_count;\n2019-11-27 11:51:03.916 UTC [9759] ERROR: syntax error at or near\n\"INCREMENTAL\" at character 6\n2019-11-27 11:51:03.916 UTC [9759] STATEMENT: DROP INCREMENTAL\nMATERIALIZED VIEW pref_count;\nERROR: syntax error at or near \"INCREMENTAL\"\nLINE 1: DROP INCREMENTAL MATERIALIZED VIEW pref_count;\n ^\nramendb=# DROP MATERIALIZED VIEW pref_count ;\nDROP MATERIALIZED VIEW\nramendb=#\n\n\nRegards.",
"msg_date": "Thu, 28 Nov 2019 13:00:05 +0900",
"msg_from": "nuko yokohama <nuko.yokohama@gmail.com>",
"msg_from_op": true,
"msg_subject": "To Suggest a \"DROP INCREMENTAL MATERIALIZED VIEW\" in psql, but the\n syntax error when you run."
},
{
"msg_contents": "On Thu, Nov 28, 2019 at 01:00:05PM +0900, nuko yokohama wrote:\n> To Suggest a \"DROP INCREMENTAL MATERIALIZED VIEW\" in psql, but the syntax\n> error when you run.\n> (\"DROP MATERIALIZED VIEW\" command can drop Incremental Materialozed view\n> normally.)\n\nIt seems to me that this is just an issue with the tab completion the\npatch is adding. When reviewing the patch, could you just report such\nissues directly on the thread of the patch? Thanks!\n--\nMichael",
"msg_date": "Thu, 28 Nov 2019 13:05:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: To Suggest a \"DROP INCREMENTAL MATERIALIZED VIEW\" in psql, but\n the syntax error when you run."
},
{
"msg_contents": "Hello nuko-san, \n\nThank you for your review!\n\nAs Michael commented, we would like to discuss this on the thread\nof the patch, so I quote your review in the following post.\n\nhttps://www.postgresql.org/message-id/20191129154513.943f4ef05896d7b9d3fed69f%40sraoss.co.jp\n\nRegards,\nYugo Nagata\n\nOn Thu, 28 Nov 2019 13:05:33 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Nov 28, 2019 at 01:00:05PM +0900, nuko yokohama wrote:\n> > To Suggest a \"DROP INCREMENTAL MATERIALIZED VIEW\" in psql, but the syntax\n> > error when you run.\n> > (\"DROP MATERIALIZED VIEW\" command can drop Incremental Materialozed view\n> > normally.)\n> \n> It seems to me that this is just an issue with the tab completion the\n> patch is adding. When reviewing the patch, could you just report such\n> issues directly on the thread of the patch? Thanks!\n> --\n> Michael\n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Fri, 29 Nov 2019 15:50:06 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: To Suggest a \"DROP INCREMENTAL MATERIALIZED VIEW\" in psql, but\n the syntax error when you run."
}
]
[
{
"msg_contents": "Hi hackers,\n\nWe just want to introduce another POC for a vectorized execution engine\nhttps://github.com/zhangh43/vectorize_engine and want to get some feedback\non the idea.\n\nThe basic idea is to extend TupleTableSlot and introduce\nVectorTupleTableSlot, which is an array of datums organized by projected\ncolumns. The array of datums per column is contiguous in memory. This makes\nexpression evaluation cache friendly, and SIMD can be utilized. We have\nrefactored SeqScanNode and AggNode to support VectorTupleTableSlot so far.\n\nBelow are the main features of our design.\n1. Pure extension. We don't hack any code into the Postgres kernel.\n\n2. CustomScan node. We use the CustomScan framework to replace original\nexecutor nodes such as SeqScan, Agg, etc. Based on CustomScan, we extend\nthe CustomScanState, BeginCustomScan(), ExecCustomScan() and\nEndCustomScan() interfaces to implement the vectorized executor logic.\n\n3. Post-planner hook. After the plan is generated, we use plan_tree_walker\nto traverse the plan tree and check whether it can be vectorized. If yes,\nthe non-vectorized nodes (SeqScan, Agg, etc.) are replaced with vectorized\nnodes (in the form of CustomScan nodes) and the vectorized executor is\nused. If not, we revert to the original plan and use the non-vectorized\nexecutor. In the future this part could be enhanced: for example, instead\nof reverting to the original plan when some nodes cannot be vectorized, we\ncould add Batch/UnBatch nodes to generate a plan with both vectorized and\nnon-vectorized nodes.\n\n4. Support implementing new vectorized executor nodes gradually. We have\ncurrently only vectorized SeqScan and Agg, but other queries, including\nthose with Join, can still be run when the vectorize extension is enabled.\n\n5. Inherit original executor code. Instead of rewriting the whole executor,\nwe chose a smoother method: modify the current Postgres executor nodes and\nmake them vectorized. We copy the current executor node's c file into our\nextension and add vectorize logic based on it. When Postgres enhances its\nexecutor, we can relatively easily merge the changes back. We want to know\nwhether this is a good way to write a vectorized executor extension.\n\n6. Pluggable storage. Postgres supports pluggable storage now.\nTupleTableSlot has been refactored into the abstract struct\nTupleTableSlotOps. VectorTupleTableSlot could be implemented under this\nframework when we upgrade the extension to the latest PG.\n\nWe ran the TPC-H (10G) benchmark, and the result for Q1 is 50 sec (PG) vs.\n28 sec (vectorized PG). The performance gain can be improved further:\n1. Heap tuple deforming occupies many CPU cycles. We will try zedstore in\nthe future, since a vectorized executor is more compatible with a column\nstore.\n\n2. The vectorized agg is not fully vectorized and we have many\noptimizations to do, for example batch-computing the hash value and\noptimizing the hash table for vectorized HashAgg.\n\n3. The conversion cost from Datum to the actual type and vice versa is also\nhigh, for example DatumGetFloat4 & Float4GetDatum. One optimization may be\nto store the actual type in VectorTupleTableSlot directly, instead of an\narray of datums.\n\nRelated works:\n1. VOPS is a vectorized execution extension. Link:\nhttps://github.com/postgrespro/vops.\nIt doesn't use the custom scan framework; it uses UDFs to do the vectorized\noperations, e.g. it changes the SQL syntax to do aggregation.\n\n2. The Citus vectorized executor is another POC. Link:\nhttps://github.com/citusdata/postgres_vectorization_test.\nIt uses ExecutorRun_hook to run the vectorized executor and uses cstore_fdw\nto support column storage.\n\nNote that the vectorized executor engine is based on PG 9.6 now, but it\ncould be ported to master / zedstore with some effort. 
We would appreciate\nsome feedback before moving further in that direction.\n\nThanks,\nHubert Zhang, Gang Xiong, Ning Yu, Asim Praveen",
"msg_date": "Thu, 28 Nov 2019 17:23:59 +0800",
"msg_from": "Hubert Zhang <hzhang@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Yet another vectorized engine"
},
{
"msg_contents": "On 28.11.2019 12:23, Hubert Zhang wrote:\n> Hi hackers,\n>\n> We just want to introduce another POC for vectorized execution engine \n> https://github.com/zhangh43/vectorize_engine and want to get some \n> feedback on the idea.\n>\n> The basic idea is to extend the TupleTableSlot and introduce \n> VectorTupleTableSlot, which is an array of datums organized by \n> projected columns. The array of datum per column is continuous in \n> memory. This makes the expression evaluation cache friendly and SIMD \n> could be utilized. We have refactored the SeqScanNode and AggNode to \n> support VectorTupleTableSlot currently.\n>\n> Below are features in our design.\n> 1. Pure extension. We don't hack any code into postgres kernel.\n>\n> 2. CustomScan node. We use CustomScan framework to replace original \n> executor node such as SeqScan, Agg etc. Based on CustomScan, we could \n> extend the CustomScanState, BeginCustomScan(), ExecCustomScan(), \n> EndCustomScan() interface to implement vectorize executor logic.\n>\n> 3. Post planner hook. After plan is generated, we use plan_tree_walker \n> to traverse the plan tree and check whether it could be vectorized. If \n> yes, the non-vectorized nodes (SeqScan, Agg etc.) are replaced with \n> vectorized nodes (in form of CustomScan node) and use vectorized \n> executor. If no, we will revert to the original plan and use \n> non-vectorized executor. In future this part could be enhanced, for \n> example, instead of revert to original plan when some nodes cannot be \n> vectorized, we could add Batch/UnBatch node to generate a plan with \n> both vectorized as well as non-vectorized node.\n>\n> 4. Support implement new vectorized executor node gradually. We \n> currently only vectorized SeqScan and Agg but other queries which \n> including Join could also be run when vectorize extension is enabled.\n>\n> 5. Inherit original executor code. 
Instead of rewriting the whole \n> executor, we choose a more smooth method to modify current Postgres \n> executor node and make it vectorized. We copy the current executor \n> node's c file into our extension, and add vectorize logic based on it. \n> When Postgres enhance its executor, we could relatively easily merge \n> them back. We want to know whether this is a good way to write \n> vectorized executor extension?\n>\n> 6. Pluggable storage. Postgres has supported pluggable storage now. \n> TupleTableSlot is refactored as abstract struct TupleTableSlotOps. \n> VectorTupleTableSlot could be implemented under this framework when we \n> upgrade the extension to latest PG.\n>\n> We run the TPCH(10G) benchmark and result of Q1 is 50sec(PG) V.S. \n> 28sec(Vectorized PG). Performance gain can be improved by:\n> 1. heap tuple deform occupy many CPUs. We will try zedstore in future, \n> since vectorized executor is more compatible with column store.\n>\n> 2. vectorized agg is not fully vectorized and we have many \n> optimization need to do. For example, batch compute the hash value, \n> optimize hash table for vectorized HashAgg.\n>\n> 3. Conversion cost from Datum to actual type and vice versa is also \n> high, for example DatumGetFloat4 & Float4GetDatum. One optimization \n> maybe that we store the actual type in VectorTupleTableSlot directly, \n> instead of an array of datums.\n>\n> Related works:\n> 1. VOPS is a vectorized execution extension. Link: \n> https://github.com/postgrespro/vops.\n> It doesn't use custom scan framework and use UDF to do the vectorized \n> operation e.g. it changes the SQL syntax to do aggregation.\n>\n> 2. Citus vectorized executor is another POC. 
Link: \n> https://github.com/citusdata/postgres_vectorization_test.\n> It uses ExecutorRun_hook to run the vectorized executor and uses \n> cstore fdw to support column storage.\n>\n> Note that the vectorized executor engine is based on PG9.6 now, but it \n> could be ported to master / zedstore with some effort. We would \n> appreciate some feedback before moving further in that direction.\n>\n> Thanks,\n> Hubert Zhang, Gang Xiong, Ning Yu, Asim Praveen\n\nHi,\n\nI think that a vectorized executor is an absolutely necessary thing for \nPostgres, especially taking into account that we now have a columnar \nstore prototype (zedstore).\nTo take full advantage of a columnar store we definitely need a \nvectorized executor.\n\nBut I do not completely understand why you are proposing to implement it \nas an extension.\nYes, custom nodes make it possible to provide vector execution without \naffecting the Postgres core.\nBut for efficient integration of zedstore and a vectorized executor we \nneed to extend the table AM (VectorTupleTableSlot and the corresponding \nscan functions).\nCertainly it is easier to contribute a vectorized executor as an \nextension, but sooner or later I think it should be added to the \nPostgres core.\n\nAs far as I understand you already have some prototype implementation \n(otherwise how would you have gotten the performance results)?\nIf so, are you planning to publish it, or do you think that the executor \nshould be developed from scratch?\n\nSome of my concerns, based on the VOPS experience:\n\n1. The vertical (columnar) model is preferable for some kinds of \nqueries, but there are some classes of queries for which it is less \nefficient.\nMoreover, data is usually imported into the database in row format. \nInserting it into a columnar store record by record is very inefficient.\nSo you need some kind of bulk loader which is able to buffer input \ndata before loading it into the columnar store.\nActually this problem is more related to the data model than to the \nvectorized executor. But what I want to express here is that it may be \nbetter to have both representations (horizontal and vertical)\nand let the optimizer choose the most efficient one for a particular \nquery.\n\n2. A columnar store and a vectorized executor are most efficient for \nqueries like \"select sum(x) from T where ...\".\nUnfortunately such simple queries are rarely used in real life. Usually \nanalytic queries contain group-by and joins.\nAnd here the vertical model is not always optimal (you have to \nreconstruct rows from columns to perform a join or grouping).\nTo provide efficient execution of queries you may need to create \nmultiple different projections of the same data (sorted by different \nsubsets of attributes).\nThis is why Vertica (one of the most popular columnar store DBMSs) \nsupports projections.\nThe same can be done in VOPS: using the create_projection function you \ncan specify which attributes should be scalar (grouping attributes) and \nwhich vectorized.\nIn this case you can perform grouping and joins using the standard \nPostgres executor, while performing vectorized operations for filtering \nand accumulating aggregates.\n\nThis is why Q1 is 20 times faster in VOPS, and not 2 times as in your \nprototype.\nSo I think that the columnar store should make it possible to maintain \nseveral projections of a table, and the optimizer should be able to \nchoose one of them automatically for a particular query.\nDefinitely synchronization of projections is a challenging problem. \nFortunately OLAP usually does not require the most recent data.\n\n3. I wonder if the vectorized executor should support only built-in \ntypes and predefined operators? Or should it be able to work with any \nuser-defined types, operators and aggregates?\nCertainly it is much easier to support only built-in scalar types. But \nthat contradicts the open and extensible nature of Postgres.\n\n4. Did you already think about the format of storing data in \nVectorTupleTableSlot? Should it be an array of Datum? 
Or we need to \nrepresent vector in more low level format (for example\nas array of floats for real4 type)?\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 28 Nov 2019 19:08:57 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Yet another vectorized engine"
},
{
"msg_contents": "On Thu, Nov 28, 2019 at 05:23:59PM +0800, Hubert Zhang wrote:\n> Note that the vectorized executor engine is based on PG9.6 now, but it\n> could be ported to master / zedstore with some effort. We would appreciate\n> some feedback before moving further in that direction.\n\nThere has been no feedback yet, unfortunately. The patch does not\napply anymore, so a rebase is necessary. For now I am moving the\npatch to next CF, waiting on author.\n--\nMichael",
"msg_date": "Sun, 1 Dec 2019 11:05:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Yet another vectorized engine"
},
{
"msg_contents": "Hi Konstantin,\nThanks for your reply.\n\nOn Fri, Nov 29, 2019 at 12:09 AM Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> wrote:\n\n> On 28.11.2019 12:23, Hubert Zhang wrote:\n>\n> We just want to introduce another POC for vectorized execution engine\n> https://github.com/zhangh43/vectorize_engine and want to get some\n> feedback on the idea.\n>\n> But I do not completely understand why you are proposing to implement it\n> as extension.\n> Yes, custom nodes makes it possible to provide vector execution without\n> affecting Postgres core.\n> But for efficient integration of zedstore and vectorized executor we need\n> to extend table-AM (VectorTupleTableSlot and correspondent scan functions).\n> Certainly it is easier to contribute vectorized executor as extension, but\n> sooner or later I think it should be added to Postgres core.\n>\n> As far as I understand you already have some prototype implementation\n> (otherwise how you got the performance results)?\n> If so, are you planning to publish it or you think that executor should be\n> developed from scratch?\n>\n\nThe prototype extension is at https://github.com/zhangh43/vectorize_engine\nI agree the vectorized executor should be added to the Postgres core some\nday. But it is such a huge feature: it needs changes not only to the\nextended table-AM you mentioned but also to every executor node, such as\nthe Agg, Join and Sort nodes. What's more, the expression evaluation\nfunctions and the aggregates' transition functions, combine functions etc.\nall need vectorized versions. Hence we implement it as an extension first,\nand if it becomes popular in the community and stable, we could merge it\ninto the Postgres core whenever we want.\n\nWe do want to get some feedback from the community about CustomScan.\nCustomScan is just an abstract layer. It's typically used to support user\ndefined scan nodes, but some other PG extensions (pg_strom) have already\nused it as a general custom node, e.g. Agg, Join etc. Since the vectorized\nengine needs to support vectorized processing in all executor nodes,\nfollowing the above idea, our choice is to use CustomScan.\n\n\n> Some my concerns based on VOPS experience:\n>\n\n> 1. Vertical (columnar) model is preferable for some kind of queries, but\n> there are some classes of queries for which it is less efficient.\n> Moreover, data is used to be imported in the database in row format.\n> Inserting it in columnar store record-by-record is very inefficient.\n> So you need some kind of bulk loader which will be able to buffer input\n> data before loading it in columnar store.\n> Actually this problem it is more related with data model rather than\n> vectorized executor. But what I want to express here is that it may be\n> better to have both representation (horizontal and vertical)\n> and let optimizer choose most efficient one for particular query.\n>\n>\nYes, in general, the row format is better for OLTP queries and the column\nformat is better for OLAP queries.\nAs for the storage type (or data model), I think the DBA should choose the\nrow or column store to use for a specific table.\nAs for the executor, it's a good idea to let the optimizer choose based on\ncost. That is a long-term goal; for now our extension falls back to the\noriginal row executor for the Insert, Update and IndexScan cases in a\nrough way.\nWe want the extension to be enhanced gradually.\n\n\n> 2. Columnar store and vectorized executor are most efficient for query\n> like \"select sum(x) from T where ...\".\n> Unfortunately such simple queries are rarely used in real life. Usually\n> analytic queries contain group-by and joins.\n> And here vertical model is not always optimal (you have to reconstruct\n> rows from columns to perform join or grouping).\n> To provide efficient execution of queries you may need to create multiple\n> different projections of the same data (sorted by different subset of\n> attributes).\n> This is why Vertica (one of the most popular columnar store DBMS) is\n> supporting projections.\n> The same can be done in VOPS: using create_projection function you can\n> specify which attributes should be scalar (grouping attributes) and which\n> vectorized.\n> In this case you can perform grouping and joins using standard Postgres\n> executor, while perform vectorized operations for filtering and\n> accumulating aggregates.\n>\n\n> This is why Q1 is 20 times faster in VOPS and not 2 times as in your\n> prototype.\n> So I think that columnar store should make it possible to maintain several\n> projections of table and optimizer should be able to automatically choose\n> one of them for particular query.\n> Definitely synchronization of projections is challenged problem.\n> Fortunately OLAP usually not require most recent data.\n>\n\nProjections in Vertica are useful. I tested it, and VOPS is really faster.\nIt would be nice if you could contribute it to the PG core. Our extension\naims to change neither the Postgres code nor the user's SQL and existing\ntables.\nWe will continue to optimize our vectorized implementation. Vectorized\nHashAgg needs a vectorized hash table implementation, e.g. calculating the\nhash key in a batched way and probing the hash table in a batched way. The\noriginal hash table in PG is of course not a vectorized hash table.\n\n\n> 3. I wonder if vectorized executor should support only built-in types and\n> predefined operators? Or it should be able to work with any user defined\n> types, operators and aggregates?\n> Certainly it is much easier to support only built-in scalar types. 
But it\n> contradicts to open and extensible nature of Postgres.\n>\n\nYes, we should support user-defined types. This could be done by\nintroducing a registration layer which maps each row type to a vector\ntype, e.g. int4 -> vint4, and likewise for each operator.\n\n> 4. Did you already think about format of storing data in\n> VectorTupleTableSlot? Should it be array of Datum? Or we need to represent\n> vector in more low level format (for example\n> as array of floats for real4 type)?\n>\n\nOur perf results show that the datum conversion is not effective, and we\nare preparing to implement the low-level format array instead of the array\nof datums, as you mentioned.\n-- \nThanks\n\nHubert Zhang",
"msg_date": "Mon, 2 Dec 2019 09:15:32 +0800",
"msg_from": "Hubert Zhang <hzhang@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Yet another vectorized engine"
},
{
"msg_contents": "On Sun, Dec 1, 2019 at 10:05 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Nov 28, 2019 at 05:23:59PM +0800, Hubert Zhang wrote:\n> > Note that the vectorized executor engine is based on PG9.6 now, but it\n> > could be ported to master / zedstore with some effort. We would\n> appreciate\n> > some feedback before moving further in that direction.\n>\n> There has been no feedback yet, unfortunately. The patch does not\n> apply anymore, so a rebase is necessary. For now I am moving the\n> patch to next CF, waiting on author.\n> --\n> Michael\n>\n\nThanks we'll rebase and resubmit the patch.\n-- \nThanks\n\nHubert Zhang\n\nOn Sun, Dec 1, 2019 at 10:05 AM Michael Paquier <michael@paquier.xyz> wrote:On Thu, Nov 28, 2019 at 05:23:59PM +0800, Hubert Zhang wrote:\n> Note that the vectorized executor engine is based on PG9.6 now, but it\n> could be ported to master / zedstore with some effort. We would appreciate\n> some feedback before moving further in that direction.\n\nThere has been no feedback yet, unfortunately. The patch does not\napply anymore, so a rebase is necessary. For now I am moving the\npatch to next CF, waiting on author.\n--\nMichael\nThanks we'll rebase and resubmit the patch.-- ThanksHubert Zhang",
"msg_date": "Mon, 2 Dec 2019 09:42:15 +0800",
"msg_from": "Hubert Zhang <hzhang@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Yet another vectorized engine"
},
{
"msg_contents": "On 02.12.2019 4:15, Hubert Zhang wrote:\n>\n> The prototype extension is at https://github.com/zhangh43/vectorize_engine\n\nI am very sorry, that I have no followed this link.\nFew questions concerning your design decisions:\n\n1. Will it be more efficient to use native arrays in vtype instead of \narray of Datum? I think it will allow compiler to generate more \nefficient code for operations with float4 and int32 types.\nIt is possible to use union to keep fixed size of vtype.\n2. Why VectorTupleSlot contains array (batch) of heap tuples rather than \nvectors (array of vtype)?\n3. Why you have to implement your own plan_tree_mutator and not using \nexpression_tree_mutator?\n4. As far as I understand you now always try to replace SeqScan with \nyour custom vectorized scan. But it makes sense only if there are quals \nfor this scan or aggregation is performed.\nIn other cases batch+unbatch just adds extra overhead, doesn't it?\n5. Throwing and catching exception for queries which can not be \nvectorized seems to be not the safest and most efficient way of handling \nsuch cases.\nMay be it is better to return error code in plan_tree_mutator and \npropagate this error upstairs?\n6. Have you experimented with different batch size? I have done similar \nexperiments in VOPS and find out that tile size larger than 128 are not \nproviding noticable increase of performance.\nYou are currently using batch size 1024 which is significantly larger \nthan typical amount of tuples on one page.\n7. How vectorized scan can be combined with parallel execution (it is \nalready supported in9.6, isn't it?)\n\n-- \n\nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\n\n\nOn 02.12.2019 4:15, Hubert Zhang wrote:\n\n\n\n\n\nThe prototype extension is at https://github.com/zhangh43/vectorize_engine\n\n\n\n\n I am very sorry, that I have no followed this link. 
\n Few questions concerning your design decisions:\n\n 1. Will it be more efficient to use native arrays in vtype instead\n of array of Datum? I think it will allow compiler to generate more\n efficient code for operations with float4 and int32 types.\n It is possible to use union to keep fixed size of vtype.\n 2. Why VectorTupleSlot contains array (batch) of heap tuples rather\n than vectors (array of vtype)?\n 3. Why you have to implement your own plan_tree_mutator and not\n using expression_tree_mutator?\n 4. As far as I understand you now always try to replace SeqScan with\n your custom vectorized scan. But it makes sense only if there are\n quals for this scan or aggregation is performed.\n In other cases batch+unbatch just adds extra overhead, doesn't it?\n 5. Throwing and catching exception for queries which can not be\n vectorized seems to be not the safest and most efficient way of\n handling such cases. \n May be it is better to return error code in plan_tree_mutator and\n propagate this error upstairs? \n 6. Have you experimented with different batch size? I have done\n similar experiments in VOPS and find out that tile size larger than\n 128 are not providing noticable increase of performance.\n You are currently using batch size 1024 which is significantly\n larger than typical amount of tuples on one page. \n 7. How vectorized scan can be combined with parallel execution (it\n is already supported in9.6, isn't it?)\n\n --\n Konstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 3 Dec 2019 12:58:24 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Yet another vectorized engine"
},
{
"msg_contents": "Thanks Konstantin for your detailed review!\n\nOn Tue, Dec 3, 2019 at 5:58 PM Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> wrote:\n\n>\n>\n> On 02.12.2019 4:15, Hubert Zhang wrote:\n>\n>\n> The prototype extension is at https://github.com/zhangh43/vectorize_engine\n>\n>\n> I am very sorry, that I have no followed this link.\n> Few questions concerning your design decisions:\n>\n> 1. Will it be more efficient to use native arrays in vtype instead of\n> array of Datum? I think it will allow compiler to generate more efficient\n> code for operations with float4 and int32 types.\n> It is possible to use union to keep fixed size of vtype.\n\n\nYes, I'm also considering that when scan a column store, the column batch\nis loaded into a continuous memory region. For int32, the size of this\nregion is 4*BATCHSIZE, while for int16, the size is 2*BATCHSIZE. So using\nnative array could just do a single memcpy to fill the vtype batch.\n\n\n> 2. Why VectorTupleSlot contains array (batch) of heap tuples rather than\n> vectors (array of vtype)?\n>\n\na. VectorTupleSlot stores array of vtype in tts_values field which is used\nto reduce the code change and reuse functions like ExecProject. Of course\nwe could use separate field to store vtypes.\nb. VectorTupleSlot also contains array of heap tuples. This used to do heap\ntuple deform. In fact, the tuples in a batch may across many pages, so we\nalso need to pin an array of related pages instead of just one page.\n\n3. Why you have to implement your own plan_tree_mutator and not using\n> expression_tree_mutator?\n>\n\nI also want to replace plan node, e.g. Agg->CustomScan(with VectorAgg\nimplementation). expression_tree_mutator cannot be used to mutate plan node\nsuch as Agg, am I right?\n\n\n> 4. As far as I understand you now always try to replace SeqScan with your\n> custom vectorized scan. 
But it makes sense only if there are quals for this\n> scan or aggregation is performed.\n> In other cases batch+unbatch just adds extra overhead, doesn't it?\n>\nProbably extra overhead for heap format and query like 'select i from t;'\nwithout qual, projection, aggregation.\nBut with column store, VectorScan could directly read batch, and no\nadditional batch cost. Column store is the better choice for OLAP queries.\nCan we conclude that it would be better to use vector engine for OLAP\nqueries and row engine for OLTP queries.\n\n5. Throwing and catching exception for queries which can not be vectorized\n> seems to be not the safest and most efficient way of handling such cases.\n> May be it is better to return error code in plan_tree_mutator and\n> propagate this error upstairs?\n\n\nYes, as for efficiency, another way is to enable some plan node to be\nvectorized and leave other nodes not vectorized and add batch/unbatch layer\nbetween them(Is this what you said \"propagate this error upstairs\"). As you\nmentioned, this could introduce additional overhead. Is there any other\ngood approaches?\nWhat do you mean by not safest? PG catch will receive the ERROR, and\nfallback to the original non-vectorized plan.\n\n\n> 6. Have you experimented with different batch size? I have done similar\n> experiments in VOPS and find out that tile size larger than 128 are not\n> providing noticable increase of performance.\n> You are currently using batch size 1024 which is significantly larger than\n> typical amount of tuples on one page.\n>\n\nGood point, We will do some experiments on it.\n\n7. How vectorized scan can be combined with parallel execution (it is\n> already supported in9.6, isn't it?)\n>\n\nWe didn't implement it yet. 
But the idea is the same as non parallel one.\nCopy the current parallel scan and implement vectorized Gather, keeping\ntheir interface to be VectorTupleTableSlot.\nOur basic idea to reuse most of the current PG executor logic, and make\nthem vectorized, then tuning performance gradually.\n\n-- \nThanks\n\nHubert Zhang\n\nThanks Konstantin for your detailed review!On Tue, Dec 3, 2019 at 5:58 PM Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\n\n\n\nOn 02.12.2019 4:15, Hubert Zhang wrote:\n\n\n\n\nThe prototype extension is at https://github.com/zhangh43/vectorize_engine\n\n\n\n\n I am very sorry, that I have no followed this link. \n Few questions concerning your design decisions:\n\n 1. Will it be more efficient to use native arrays in vtype instead\n of array of Datum? I think it will allow compiler to generate more\n efficient code for operations with float4 and int32 types.It is possible to use union to keep fixed size of vtype. Yes, I'm also considering that when scan a column store, the column batch is loaded into a continuous memory region. For int32, the size of this region is 4*BATCHSIZE, while for int16, the size is 2*BATCHSIZE. So using native array could just do a single memcpy to fill the vtype batch. 2. Why VectorTupleSlot contains array (batch) of heap tuples rather\n than vectors (array of vtype)?a. VectorTupleSlot stores array of vtype in tts_values field which is used to reduce the code change and reuse functions like ExecProject. Of course we could use separate field to store vtypes.b. VectorTupleSlot also contains array of heap tuples. This used to do heap tuple deform. In fact, the tuples in a batch may across many pages, so we also need to pin an array of related pages instead of just one page.\n 3. Why you have to implement your own plan_tree_mutator and not\n using expression_tree_mutator?I also want to replace plan node, e.g. Agg->CustomScan(with VectorAgg implementation). 
expression_tree_mutator cannot be used to mutate plan node such as Agg, am I right? \n 4. As far as I understand you now always try to replace SeqScan with\n your custom vectorized scan. But it makes sense only if there are\n quals for this scan or aggregation is performed.\n In other cases batch+unbatch just adds extra overhead, doesn't it?Probably extra overhead for heap format and query like 'select i from t;' without qual, projection, aggregation.But with column store, VectorScan could directly read batch, and no additional batch cost. Column store is the better choice for OLAP queries.Can we conclude that it would be better to use vector engine for OLAP queries and row engine for OLTP queries.\n 5. Throwing and catching exception for queries which can not be\n vectorized seems to be not the safest and most efficient way of\n handling such cases.May be it is better to return error code in plan_tree_mutator and\n propagate this error upstairs? Yes, as for efficiency, another way is to enable some plan node to be vectorized and leave other nodes not vectorized and add batch/unbatch layer between them(Is this what you said \"propagate this error upstairs\"). As you mentioned, this could introduce additional overhead. Is there any other good approaches?What do you mean by not safest? PG catch will receive the ERROR, and fallback to the original non-vectorized plan.\n\n 6. Have you experimented with different batch size? I have done\n similar experiments in VOPS and find out that tile size larger than\n 128 are not providing noticable increase of performance.\n You are currently using batch size 1024 which is significantly\n larger than typical amount of tuples on one page. Good point, We will do some experiments on it. \n 7. How vectorized scan can be combined with parallel execution (it\n is already supported in9.6, isn't it?)\n\nWe didn't implement it yet. But the idea is the same as non parallel one. 
Copy the current parallel scan and implement vectorized Gather, keeping their interface to be VectorTupleTableSlot.Our basic idea to reuse most of the current PG executor logic, and make them vectorized, then tuning performance gradually.-- ThanksHubert Zhang",
"msg_date": "Wed, 4 Dec 2019 17:13:57 +0800",
"msg_from": "Hubert Zhang <hzhang@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Yet another vectorized engine"
},
{
"msg_contents": "On 04.12.2019 12:13, Hubert Zhang wrote:\n> 3. Why you have to implement your own plan_tree_mutator and not using \n> expression_tree_mutator?\n>\n> I also want to replace plan node, e.g. Agg->CustomScan(with VectorAgg \n> implementation). expression_tree_mutator cannot be used to mutate plan \n> node such as Agg, am I right?\n\nO, sorry, I see.\n>\n> 4. As far as I understand you now always try to replace SeqScan\n> with your custom vectorized scan. But it makes sense only if there\n> are quals for this scan or aggregation is performed.\n> In other cases batch+unbatch just adds extra overhead, doesn't it?\n>\n> Probably extra overhead for heap format and query like 'select i from \n> t;' without qual, projection, aggregation.\n> But with column store, VectorScan could directly read batch, and no \n> additional batch cost. Column store is the better choice for OLAP queries.\n\nGenerally, yes.\nBut will it be true for the query with a lot of joins?\n\nselect * from T1 join T2 on (T1.pk=T2.fk) join T3 on (T2.pk=T3.fk) join \nT4 ...\n\nHow can batching improve performance in this case?\nAlso if query contains LIMIT clause or cursors, then batching can cause \nfetching of useless records (which never will be requested by client).\n\n> Can we conclude that it would be better to use vector engine for OLAP \n> queries and row engine for OLTP queries.\n>\n> 5. Throwing and catching exception for queries which can not be\n> vectorized seems to be not the safest and most efficient way of\n> handling such cases.\n> May be it is better to return error code in plan_tree_mutator and\n> propagate this error upstairs? \n>\n> Yes, as for efficiency, another way is to enable some plan node to be \n> vectorized and leave other nodes not vectorized and add batch/unbatch \n> layer between them(Is this what you said \"propagate this error \n> upstairs\"). As you mentioned, this could introduce additional \n> overhead. 
Is there any other good approaches?\n> What do you mean by not safest?\n> PG catch will receive the ERROR, and fallback to the original \n> non-vectorized plan.\n\nThe problem with catching and ignoring exception was many times \ndiscussed in hackers.\nUnfortunately Postgres PG_TRY/PG_CATCH mechanism is not analog of \nexception mechanism in more high level languages, like C++, Java...\nIt doesn't perform stack unwind. If some resources (files, locks, \nmemory,...) were obtained before throwing error, then them are not \nreclaimed.\nOnly rollback of transaction is guaranteed to release all resources. And \nit actually happen in case of normal error processing.\nBut if you catch and ignore exception , trying to continue execution, \nthen it can cause many problems.\n\nMay be in your case it is not a problem, because you know for sure where \nerror can happen: it is thrown by plan_tree_mutator\nand looks like there are no resources obtained by this function. But in \nany case overhead of setjmp is much higher than of explicit checks of \nreturn code.\nSo checking return codes will not actually add some noticeable overhead \nexcept code complication by adding extra checks.\nBut in can be hidden in macros which are used in any case (like MUTATE).\n>\n> 7. How vectorized scan can be combined with parallel execution (it\n> is already supported in9.6, isn't it?)\n>\n>\n> We didn't implement it yet. But the idea is the same as non parallel \n> one. 
Copy the current parallel scan and implement vectorized Gather, \n> keeping their interface to be VectorTupleTableSlot.\n> Our basic idea to reuse most of the current PG executor logic, and \n> make them vectorized, then tuning performance gradually.\n\nParallel scan is scattering pages between parallel workers.\nTo fill VectorTupleSlot with data you may need more than one page \n(unless you make a decision that it can fetch tuples only from single page).\nSo it should be somehow take in account specific of parallel search.\nAlso there is special nodes for parallel search so if we want to provide \nparallel execution for vectorized operations we need also to substitute \nthis nodes with\ncustom nodes.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\n\n\nOn 04.12.2019 12:13, Hubert Zhang\n wrote:\n\n\n\n\n3. Why you have to\n implement your own plan_tree_mutator and not using\n expression_tree_mutator?\n\n\n\n\nI also want to replace plan node, e.g.\n Agg->CustomScan(with VectorAgg\n implementation). expression_tree_mutator cannot be used to\n mutate plan node such as Agg, am I right?\n\n\n\n\n O, sorry, I see.\n\n\n\n \n\n 4. As far as I\n understand you now always try to replace SeqScan with\n your custom vectorized scan. But it makes sense only if\n there are quals for this scan or aggregation is\n performed.\n In other cases batch+unbatch just adds extra overhead,\n doesn't it?\n\n\nProbably extra overhead for heap format and query like\n 'select i from t;' without qual, projection, aggregation.\nBut with column store, VectorScan could directly read\n batch, and no additional batch cost. Column store is the\n better choice for OLAP queries.\n\n\n\n\n Generally, yes.\n But will it be true for the query with a lot of joins?\n\n select * from T1 join T2 on (T1.pk=T2.fk) join T3 on (T2.pk=T3.fk)\n join T4 ...\n\n How can batching improve performance in this case? 
\n Also if query contains LIMIT clause or cursors, then batching can\n cause fetching of useless records (which never will be requested by\n client).\n\n\n\n\nCan we conclude that it would be better to use vector\n engine for OLAP queries and row engine for OLTP queries.\n\n\n\n 5. Throwing\n and catching exception for queries which can not be\n vectorized seems to be not the safest and most efficient\n way of handling such cases.\nMay be it is better to return error\n code in plan_tree_mutator and propagate this error\n upstairs? \n \nYes, as for efficiency, another way is to enable some\n plan node to be vectorized and leave other nodes not\n vectorized and add batch/unbatch layer between them(Is this\n what you said \"propagate this error upstairs\"). As you\n mentioned, this could introduce additional overhead. Is\n there any other good approaches?\nWhat do you mean by not safest?\n PG catch will receive the ERROR, and fallback to the\n original non-vectorized plan.\n\n\n\n\n\n The problem with catching and ignoring exception was many times\n discussed in hackers. \n Unfortunately Postgres PG_TRY/PG_CATCH mechanism is not analog of\n exception mechanism in more high level languages, like C++, Java...\n It doesn't perform stack unwind. If some resources (files, locks,\n memory,...) were obtained before throwing error, then them are not\n reclaimed.\n Only rollback of transaction is guaranteed to release all resources.\n And it actually happen in case of normal error processing.\n But if you catch and ignore exception , trying to continue\n execution, then it can cause many problems.\n\n May be in your case it is not a problem, because you know for sure\n where error can happen: it is thrown by plan_tree_mutator\nand looks like there are no resources obtained by this\n function. 
But in any case overhead of setjmp is much higher than of\n explicit checks of return code.\n So checking return codes will not actually add some noticeable\n overhead except code complication by adding extra checks.\n But in can be hidden in macros which are used in any case (like\n MUTATE).\n\n\n\n\n 7. How\n vectorized scan can be combined with parallel execution\n (it is already supported in9.6, isn't it?)\n\n\n\n\n\nWe didn't implement it yet. But the idea is the same as non\n parallel one. Copy the current parallel scan and implement\n vectorized Gather, keeping their interface to be\n VectorTupleTableSlot.\nOur basic idea to reuse most of the current PG executor\n logic, and make them vectorized, then tuning performance\n gradually.\n\n\n\n Parallel scan is scattering pages between parallel workers.\n To fill VectorTupleSlot with data you may need more than one page\n (unless you make a decision that it can fetch tuples only from\n single page).\n So it should be somehow take in account specific of parallel search.\n Also there is special nodes for parallel search so if we want to\n provide parallel execution for vectorized operations we need also to\n substitute this nodes with \n custom nodes.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 4 Dec 2019 17:08:53 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Yet another vectorized engine"
},
{
"msg_contents": "Thanks Konstantin,\nYour suggestions are very helpful. I have added them into issues of\nvectorize_engine repo\nhttps://github.com/zhangh43/vectorize_engine/issues\n\nOn Wed, Dec 4, 2019 at 10:08 PM Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> wrote:\n\n>\n>\n> On 04.12.2019 12:13, Hubert Zhang wrote:\n>\n> 3. Why you have to implement your own plan_tree_mutator and not using\n> expression_tree_mutator?\n>\n> I also want to replace plan node, e.g. Agg->CustomScan(with VectorAgg\n> implementation). expression_tree_mutator cannot be used to mutate plan node\n> such as Agg, am I right?\n>\n>\n> O, sorry, I see.\n>\n>\n>\n>> 4. As far as I understand you now always try to replace SeqScan with your\n>> custom vectorized scan. But it makes sense only if there are quals for this\n>> scan or aggregation is performed.\n>> In other cases batch+unbatch just adds extra overhead, doesn't it?\n>>\n> Probably extra overhead for heap format and query like 'select i from t;'\n> without qual, projection, aggregation.\n> But with column store, VectorScan could directly read batch, and no\n> additional batch cost. Column store is the better choice for OLAP queries.\n>\n>\n> Generally, yes.\n> But will it be true for the query with a lot of joins?\n>\n> select * from T1 join T2 on (T1.pk=T2.fk) join T3 on (T2.pk=T3.fk) join T4\n> ...\n>\n> How can batching improve performance in this case?\n> Also if query contains LIMIT clause or cursors, then batching can cause\n> fetching of useless records (which never will be requested by client).\n>\n> Can we conclude that it would be better to use vector engine for OLAP\n> queries and row engine for OLTP queries.\n>\n> 5. 
Throwing and catching exception for queries which can not be vectorized\n>> seems to be not the safest and most efficient way of handling such cases.\n>> May be it is better to return error code in plan_tree_mutator and\n>> propagate this error upstairs?\n>\n>\n> Yes, as for efficiency, another way is to enable some plan node to be\n> vectorized and leave other nodes not vectorized and add batch/unbatch layer\n> between them(Is this what you said \"propagate this error upstairs\"). As you\n> mentioned, this could introduce additional overhead. Is there any other\n> good approaches?\n> What do you mean by not safest?\n> PG catch will receive the ERROR, and fallback to the original\n> non-vectorized plan.\n>\n>\n> The problem with catching and ignoring exception was many times discussed\n> in hackers.\n> Unfortunately Postgres PG_TRY/PG_CATCH mechanism is not analog of\n> exception mechanism in more high level languages, like C++, Java...\n> It doesn't perform stack unwind. If some resources (files, locks,\n> memory,...) were obtained before throwing error, then them are not\n> reclaimed.\n> Only rollback of transaction is guaranteed to release all resources. And\n> it actually happen in case of normal error processing.\n> But if you catch and ignore exception , trying to continue execution, then\n> it can cause many problems.\n>\n> May be in your case it is not a problem, because you know for sure where\n> error can happen: it is thrown by plan_tree_mutator\n> and looks like there are no resources obtained by this function. But in\n> any case overhead of setjmp is much higher than of explicit checks of\n> return code.\n> So checking return codes will not actually add some noticeable overhead\n> except code complication by adding extra checks.\n> But in can be hidden in macros which are used in any case (like MUTATE).\n>\n>\n> 7. 
How vectorized scan can be combined with parallel execution (it is\n>> already supported in9.6, isn't it?)\n>>\n>\n> We didn't implement it yet. But the idea is the same as non parallel one.\n> Copy the current parallel scan and implement vectorized Gather, keeping\n> their interface to be VectorTupleTableSlot.\n> Our basic idea to reuse most of the current PG executor logic, and make\n> them vectorized, then tuning performance gradually.\n>\n>\n> Parallel scan is scattering pages between parallel workers.\n> To fill VectorTupleSlot with data you may need more than one page (unless\n> you make a decision that it can fetch tuples only from single page).\n> So it should be somehow take in account specific of parallel search.\n> Also there is special nodes for parallel search so if we want to provide\n> parallel execution for vectorized operations we need also to substitute\n> this nodes with\n> custom nodes.\n>\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n\n-- \nThanks\n\nHubert Zhang",
"msg_date": "Mon, 9 Dec 2019 10:52:08 +0800",
"msg_from": "Hubert Zhang <hzhang@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Yet another vectorized engine"
},
{
"msg_contents": "I have done some performance comparisons.\nFirst of all, I failed to run the vectorized version of Q1 with the master branch \nof your repository and\nthe PG9_6_STABLE branch of Postgres:\n\nNOTICE: query can't be vectorized\nDETAIL: Non plain agg is not supported\n\nI had to switch to the pg96 branch.\n\nResults (seconds) of Q1 execution are the following:\n\nmax_parallel_workers_per_gather | PG9_6, enable_vectorize_engine=off | PG9_6, enable_vectorize_engine=on | master (jit=on)\n0                               | 36                                 | 20                                | 10\n4                               | 10                                 | -                                 | 5\n\nI failed to run the parallel version of Q1 with enable_vectorize_engine=on \nbecause of the same error: \"Non plain agg is not supported\".\n\nSo it looks like PG-13 provides significant advantages in OLAP queries \ncompared with 9.6!\nDefinitely it doesn't mean that a vectorized executor is not needed for \nthe new version of Postgres.\nOnce ported, I expect that it should provide a comparable improvement \nin performance.\n\nBut in any case I think that a vectorized executor makes sense only \ncombined with a columnar store.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 10 Feb 2020 20:20:27 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Yet another vectorized engine"
},
{
"msg_contents": "On Mon, Feb 10, 2020 at 18:20 Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> wrote:\n\n> I have done some performance comparisons.\n> First of all I failed to run vectorized version of Q1 with master branch\n> of your repository and\n> PG9_6_STABLE branch of Postgres:\n>\n> NOTICE: query can't be vectorized\n> DETAIL: Non plain agg is not supported\n>\n> I have to switch to pg96 branch.\n>\n> Results (seconds) of Q1 execution are the following:\n>\n> max_parallel_workers_per_gather\n> PG9_6, enable_vectorize_engine=off\n> PG9_6, enable_vectorize_engine=on\n> master (jit=on)\n> 0\n> 36\n> 20\n> 10\n> 4\n> 10\n> -\n> 5\n>\n>\n> I failed to run parallel version of Q1 with enable_vectorize_engine=on\n> because of the same error: \"Non plain agg is not supported\"\n>\n>\n> So looks like PG-13 provides significant advantages in OLAP queries\n> comparing with 9.6!\n> Definitely it doesn't mean that vectorized executor is not needed for new\n> version of Postgres.\n> Once been ported, I expect that it should provide comparable improvement\n> of performance.\n>\n> But in any case I think that vectorized executor makes sense only been\n> combine with columnar store.\n>\n\n+1\n\nPavel\n\n\n>\n>\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>",
"msg_date": "Mon, 10 Feb 2020 18:32:30 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Yet another vectorized engine"
},
{
"msg_contents": "On Tue, Feb 11, 2020 at 1:20 AM Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> wrote:\n\n>\n> So looks like PG-13 provides significant advantages in OLAP queries\n> comparing with 9.6!\n> Definitely it doesn't mean that vectorized executor is not needed for new\n> version of Postgres.\n> Once been ported, I expect that it should provide comparable improvement\n> of performance.\n>\n> But in any case I think that vectorized executor makes sense only been\n> combine with columnar store.\n>\n\nThanks for the test. +1 on combining vectorization with a columnar store.\nI think when we support this extension\non master, we could try the new zedstore.\nI'm not active on this work now, but will continue when I have time. Feel\nfree to join in bringing vops's features into this extension.\n\nThanks\n\nHubert Zhang",
"msg_date": "Wed, 12 Feb 2020 18:12:36 +0800",
"msg_from": "Hubert Zhang <hzhang@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Yet another vectorized engine"
},
{
"msg_contents": "On 12.02.2020 13:12, Hubert Zhang wrote:\n> On Tue, Feb 11, 2020 at 1:20 AM Konstantin Knizhnik \n> <k.knizhnik@postgrespro.ru> wrote:\n>\n>\n> So looks like PG-13 provides significant advantages in OLAP\n> queries comparing with 9.6!\n> Definitely it doesn't mean that vectorized executor is not needed\n> for new version of Postgres.\n> Once been ported, I expect that it should provide comparable \n> improvement of performance.\n>\n> But in any case I think that vectorized executor makes sense only\n> been combine with columnar store.\n>\n>\n> Thanks for the test. +1 on vectorize should be combine with columnar \n> store. I think when we support this extension\n> on master, we could try the new zedstore.\n> I'm not active on this work now, but will continue when I have time. \n> Feel free to join bring vops's feature into this extension.\n> Thanks\n>\n> Hubert Zhang\n\nI have ported vectorize_engine to the master.\nIt took longer than I expected: a lot of things were changed in the executor.\n\nResults are the following:\n\npar.workers | PG9_6 vectorize=off | PG9_6 vectorize=on | master vectorize=off jit=on | master vectorize=off jit=off | master vectorize=on jit=on | master vectorize=on jit=off\n0           | 36                  | 20                 | 16                          | 25.5                         | 15                         | 17.5\n4           | 10                  | -                  | 5                           | 7                            | -                          | -\n\nSo it proves the theory that JIT provides almost the same speedup as \nthe vector executor (both eliminate interpretation overhead, but in \ndifferent ways).\nI am still not sure that we need a vectorized executor: with the standard \nheap it provides almost no improvement compared with the current JIT version.\nBut in any case I am going to test it with vertical storage (zedstore or \ncstore).\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 21 Feb 2020 19:58:08 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Yet another vectorized engine"
},
{
"msg_contents": "Hi\n\nOn Sat, Feb 22, 2020 at 12:58 AM Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> wrote:\n\n>\n>\n> On 12.02.2020 13:12, Hubert Zhang wrote:\n>\n> On Tue, Feb 11, 2020 at 1:20 AM Konstantin Knizhnik <\n> k.knizhnik@postgrespro.ru> wrote:\n>\n>>\n>> So looks like PG-13 provides significant advantages in OLAP queries\n>> comparing with 9.6!\n>> Definitely it doesn't mean that vectorized executor is not needed for new\n>> version of Postgres.\n>> Once been ported, I expect that it should provide comparable improvement\n>> of performance.\n>>\n>> But in any case I think that vectorized executor makes sense only been\n>> combine with columnar store.\n>>\n>\n> Thanks for the test. +1 on vectorize should be combine with columnar\n> store. I think when we support this extension\n> on master, we could try the new zedstore.\n> I'm not active on this work now, but will continue when I have time. Feel\n> free to join bring vops's feature into this extension.\n>\n> Thanks\n>\n> Hubert Zhang\n>\n>\n> I have ported vectorize_engine to the master.\n> It takes longer than I expected: a lot of things were changed in executor.\n>\n> Results are the following:\n>\n>\n> par.warkers\n> PG9_6\n> vectorize=off\n> PG9_6\n> vectorize=on\n> master\n> vectorize=off\n> jit=on\n> master\n> vectorize=off\n> jit=off master\n> vectorize=on\n> jit=ofn master\n> vectorize=on\n> jit=off\n> 0\n> 36\n> 20\n> 16\n> 25.5\n> 15\n> 17.5\n> 4\n> 10\n> -\n> 5 7\n> -\n> -\n>\n> So it proves the theory that JIT provides almost the same speedup as\n> vector executor (both eliminates interpretation overhead but in different\n> way).\n> I still not sure that we need vectorized executor: because with standard\n> heap it provides almost no improvements comparing with current JIT version.\n> But in any case I am going to test it with vertical storage (zedstore or\n> cstore).\n>\n>\n>\nThanks for the porting and testing.\nYes, PG master and 9.6 have many changes, not only in the executor but also\nin the TupleTableSlot interface.\n\nWhat matters for the performance of JIT and vectorization is the\nimplementation. This is just the beginning of the vectorization work; as\nyour vops extension reported, vectorization could run 10 times faster in\nPG. With the overhead of row storage (heap), we may not reach that speedup,\nbut I think we could do better. Also +1 on vertical storage.\n\nBTW, welcome to submit your PR for the PG master version.\n\n\n-- \nThanks\n\nHubert Zhang",
"msg_date": "Mon, 24 Feb 2020 10:08:15 +0800",
"msg_from": "Hubert Zhang <hzhang@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Yet another vectorized engine"
},
{
"msg_contents": "On 24.02.2020 05:08, Hubert Zhang wrote:\n> Hi\n>\n> On Sat, Feb 22, 2020 at 12:58 AM Konstantin Knizhnik \n> <k.knizhnik@postgrespro.ru> wrote:\n>\n>\n>\n> On 12.02.2020 13:12, Hubert Zhang wrote:\n>> On Tue, Feb 11, 2020 at 1:20 AM Konstantin Knizhnik\n>> <k.knizhnik@postgrespro.ru> wrote:\n>>\n>>\n>> So looks like PG-13 provides significant advantages in OLAP\n>> queries comparing with 9.6!\n>> Definitely it doesn't mean that vectorized executor is not\n>> needed for new version of Postgres.\n>> Once been ported, I expect that it should provide comparable \n>> improvement of performance.\n>>\n>> But in any case I think that vectorized executor makes sense\n>> only been combine with columnar store.\n>>\n>>\n>> Thanks for the test. +1 on vectorize should be combine with\n>> columnar store. I think when we support this extension\n>> on master, we could try the new zedstore.\n>> I'm not active on this work now, but will continue when I have\n>> time. Feel free to join bring vops's feature into this extension.\n>> Thanks\n>>\n>> Hubert Zhang\n>\n> I have ported vectorize_engine to the master.\n> It takes longer than I expected: a lot of things were changed in\n> executor.\n>\n> Results are the following:\n>\n>\n> par.warkers\n> \tPG9_6\n> vectorize=off\n> \tPG9_6\n> vectorize=on\n> \tmaster\n> vectorize=off\n> jit=on\n> \tmaster\n> vectorize=off\n> jit=off \tmaster\n> vectorize=on\n> jit=ofn \tmaster\n> vectorize=on\n> jit=off\n> 0\n> \t36\n> \t20\n> \t16\n> \t25.5\n> \t15\n> \t17.5\n> 4\n> \t10\n> \t-\n> \t5 \t7\n> \t-\n> \t-\n>\n>\n> So it proves the theory that JIT provides almost the same speedup\n> as vector executor (both eliminates interpretation overhead but in\n> different way).\n> I still not sure that we need vectorized executor: because with\n> standard heap it provides almost no improvements comparing with\n> current JIT version.\n> But in any case I am going to test it with vertical storage\n> (zedstore or cstore).\n>\n>\n> Thanks for the porting and testing.\n> Yes, PG master and 9.6 have many changes, not only executor, but also \n> tupletableslot interface.\n>\n> What matters the performance of JIT and Vectorization is its \n> implementation. This is just the beginning of vectorization work, just \n> as your vops extension reported, vectorization could run 10 times \n> faster in PG. With the overhead of row storage(heap), we may not reach \n> that speedup, but I think we could do better. Also +1 on vertical storage.\n>\n> BTW, welcome to submit your PR for the PG master version.\n\n\nSorry, but I have no permissions to push changes to your repository.\nI can certainly create my own fork of vectorize_engine, but I think it \nwill be better if I push a pg13 branch in your repository.",
"msg_date": "Mon, 24 Feb 2020 15:02:44 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Yet another vectorized engine"
},
{
"msg_contents": "Hi Konstantin,\nI have added you as a collaborator on GitHub. Please accept the invitation and try again.\nI think non-collaborators could also open pull requests.\n\nOn Mon, Feb 24, 2020 at 8:02 PM Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> wrote:\n\n>\n>\n> On 24.02.2020 05:08, Hubert Zhang wrote:\n>\n> Hi\n>\n> On Sat, Feb 22, 2020 at 12:58 AM Konstantin Knizhnik <\n> k.knizhnik@postgrespro.ru> wrote:\n>\n>>\n>>\n>> On 12.02.2020 13:12, Hubert Zhang wrote:\n>>\n>> On Tue, Feb 11, 2020 at 1:20 AM Konstantin Knizhnik <\n>> k.knizhnik@postgrespro.ru> wrote:\n>>\n>>>\n>>> So looks like PG-13 provides significant advantages in OLAP queries\n>>> comparing with 9.6!\n>>> Definitely it doesn't mean that vectorized executor is not needed for\n>>> new version of Postgres.\n>>> Once been ported, I expect that it should provide comparable\n>>> improvement of performance.\n>>>\n>>> But in any case I think that vectorized executor makes sense only been\n>>> combine with columnar store.\n>>>\n>>\n>> Thanks for the test. +1 on vectorize should be combine with columnar\n>> store. I think when we support this extension\n>> on master, we could try the new zedstore.\n>> I'm not active on this work now, but will continue when I have time. Feel\n>> free to join bring vops's feature into this extension.\n>>\n>> Thanks\n>>\n>> Hubert Zhang\n>>\n>>\n>> I have ported vectorize_engine to the master.\n>> It takes longer than I expected: a lot of things were changed in executor.\n>>\n>> Results are the following:\n>>\n>>\n>> par.warkers\n>> PG9_6\n>> vectorize=off\n>> PG9_6\n>> vectorize=on\n>> master\n>> vectorize=off\n>> jit=on\n>> master\n>> vectorize=off\n>> jit=off master\n>> vectorize=on\n>> jit=ofn master\n>> vectorize=on\n>> jit=off\n>> 0\n>> 36\n>> 20\n>> 16\n>> 25.5\n>> 15\n>> 17.5\n>> 4\n>> 10\n>> -\n>> 5 7\n>> -\n>> -\n>>\n>> So it proves the theory that JIT provides almost the same speedup as\n>> vector executor (both eliminates interpretation overhead but in different\n>> way).\n>> I still not sure that we need vectorized executor: because with standard\n>> heap it provides almost no improvements comparing with current JIT version.\n>> But in any case I am going to test it with vertical storage (zedstore or\n>> cstore).\n>>\n>>\n> Thanks for the porting and testing.\n> Yes, PG master and 9.6 have many changes, not only executor, but also\n> tupletableslot interface.\n>\n> What matters the performance of JIT and Vectorization is its\n> implementation. This is just the beginning of vectorization work, just as\n> your vops extension reported, vectorization could run 10 times faster in\n> PG. With the overhead of row storage(heap), we may not reach that speedup,\n> but I think we could do better. Also +1 on vertical storage.\n>\n> BTW, welcome to submit your PR for the PG master version.\n>\n>\n>\n> Sorry, but I have no permissions to push changes to your repository.\n> I can certainly create my own fork of vectorize_engine, but I think it\n> will be beter if I push pg13 branch in your repository.\n>\n>\n>\n\n-- \nThanks\n\nHubert Zhang",
"msg_date": "Mon, 24 Feb 2020 20:43:39 +0800",
"msg_from": "Hubert Zhang <hzhang@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Yet another vectorized engine"
},
{
"msg_contents": "Hi Konstantin,\n\nI checked out your pg13 branch in the repo\nhttps://github.com/zhangh43/vectorize_engine\nAfter I fixed some compile errors, I tested Q1 on TPCH-10G.\nThe result is different from yours, and the vectorized version is too slow. Note\nthat I disabled parallel workers by default.\nno JIT, no vectorize: 36 secs\nwith JIT only: 23 secs\nwith vectorize only: 33 secs\nJIT + vectorize: 29 secs\n\nMy configure options are `CFLAGS='-O3 -g -march=native'\n--prefix=/usr/local/pgsql/ --disable-cassert --enable-debug --with-llvm`.\nI will investigate why the vectorized version is so slow. Could you please provide\nyour compile options, the TPCH dataset size, and your queries (standard\nQ1?) to help me debug it.\n\nOn Mon, Feb 24, 2020 at 8:43 PM Hubert Zhang <hzhang@pivotal.io> wrote:\n\n> Hi Konstantin,\n> I have added you as a collaborator on github. Please accepted and try\n> again.\n> I think non collaborator could also open pull requests.\n>\n> On Mon, Feb 24, 2020 at 8:02 PM Konstantin Knizhnik <\n> k.knizhnik@postgrespro.ru> wrote:\n>\n>>\n>>\n>> On 24.02.2020 05:08, Hubert Zhang wrote:\n>>\n>> Hi\n>>\n>> On Sat, Feb 22, 2020 at 12:58 AM Konstantin Knizhnik <\n>> k.knizhnik@postgrespro.ru> wrote:\n>>\n>>>\n>>>\n>>> On 12.02.2020 13:12, Hubert Zhang wrote:\n>>>\n>>> On Tue, Feb 11, 2020 at 1:20 AM Konstantin Knizhnik <\n>>> k.knizhnik@postgrespro.ru> wrote:\n>>>\n>>>>\n>>>> So looks like PG-13 provides significant advantages in OLAP queries\n>>>> comparing with 9.6!\n>>>> Definitely it doesn't mean that vectorized executor is not needed for\n>>>> new version of Postgres.\n>>>> Once been ported, I expect that it should provide comparable\n>>>> improvement of performance.\n>>>>\n>>>> But in any case I think that vectorized executor makes sense only been\n>>>> combine with columnar store.\n>>>>\n>>>\n>>> Thanks for the test. +1 on vectorize should be combine with columnar\n>>> store. 
I think when we support this extension\n>>> on master, we could try the new zedstore.\n>>> I'm not active on this work now, but will continue when I have time.\n>>> Feel free to join and bring vops's features into this extension.\n>>>\n>>> Thanks\n>>>\n>>> Hubert Zhang\n>>>\n>>>\n>>> I have ported vectorize_engine to the master.\n>>> It took longer than I expected: a lot of things were changed in the\n>>> executor.\n>>>\n>>> Results (seconds) are the following:\n>>>\n>>> par.workers | PG9_6 vec=off | PG9_6 vec=on | master vec=off jit=on | master vec=off jit=off | master vec=on jit=on | master vec=on jit=off\n>>> 0 | 36 | 20 | 16 | 25.5 | 15 | 17.5\n>>> 4 | 10 | - | 5 | 7 | - | -\n>>>\n>>> So it proves the theory that JIT provides almost the same speedup as a\n>>> vector executor (both eliminate interpretation overhead, but in different\n>>> ways).\n>>> I am still not sure that we need a vectorized executor: with standard\n>>> heap it provides almost no improvement over the current JIT version.\n>>> But in any case I am going to test it with vertical storage (zedstore or\n>>> cstore).\n>>>\n>>>\n>> Thanks for the porting and testing.\n>> Yes, PG master and 9.6 have many changes, not only in the executor, but also\n>> in the tupletableslot interface.\n>>\n>> What matters for the performance of JIT and vectorization is the\n>> implementation. This is just the beginning of the vectorization work; as\n>> your vops extension reported, vectorization could run 10 times faster in\n>> PG. With the overhead of row storage (heap), we may not reach that speedup,\n>> but I think we could do better. 
Also +1 on vertical storage.\n>>\n>> BTW, welcome to submit your PR for the PG master version.\n>>\n>>\n>>\n>> Sorry, but I have no permissions to push changes to your repository.\n>> I can certainly create my own fork of vectorize_engine, but I think it\n>> will be beter if I push pg13 branch in your repository.\n>>\n>>\n>>\n>\n> --\n> Thanks\n>\n> Hubert Zhang\n>\n\n\n-- \nThanks\n\nHubert Zhang",
"msg_date": "Tue, 25 Feb 2020 16:06:11 +0800",
"msg_from": "Hubert Zhang <hzhang@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Yet another vectorized engine"
},
{
"msg_contents": "On 25.02.2020 11:06, Hubert Zhang wrote:\n> Hi Konstantin,\n>\n> I checkout your branch pg13 in repo \n> https://github.com/zhangh43/vectorize_engine\n> After I fixed some compile error, I tested Q1 on TPCH-10G\n> The result is different from yours and vectorize version is too slow. \n> Note that I disable parallel worker by default.\n> no JIT no Vectorize: 36 secs\n> with JIT only: 23 secs\n> with Vectorize only: 33 secs\n> JIT + Vectorize: 29 secs\n>\n> My config option is `CFLAGS='-O3 -g -march=native' \n> --prefix=/usr/local/pgsql/ --disable-cassert --enable-debug --with-llvm`\n> I will do some spike on why vectorized is so slow. Could you please \n> provide your compile option and the TPCH dataset size and your \n> queries(standard Q1?) to help me to debug on it.\n>\n\n\nHi, Hubert\n\nSorry, looks like I have used slightly deteriorated snapshot of master \nso I have not noticed some problems.\nFixes are committed.\n\nMost of the time is spent in unpacking heap tuple \n(tts_buffer_heap_getsomeattrs):\n\n 24.66% postgres postgres [.] tts_buffer_heap_getsomeattrs\n 8.28% postgres vectorize_engine.so [.] VExecStoreColumns\n 5.94% postgres postgres [.] HeapTupleSatisfiesVisibility\n 4.21% postgres postgres [.] bpchareq\n 4.12% postgres vectorize_engine.so [.] vfloat8_accum\n\n\nIn my version of nodeSeqscan I do not keep all fetched 1024 heap tuples \nbut stored there attribute values in vector columns immediately.\nBut to avoid extraction of useless data it is necessary to know list of \nused columns.\nThe same problem is solved in zedstore, but unfortunately there is no \nexisted method in Postgres to get list\nof used attributes. I have done it but my last implementation contains \nerror which cause loading of all columns.\nFixed version is committed.\n\nNow profile without JIT is:\n\n 15.52% postgres postgres [.] tts_buffer_heap_getsomeattrs\n 10.25% postgres postgres [.] ExecInterpExpr\n 6.54% postgres postgres [.] 
HeapTupleSatisfiesVisibility\n 5.12% postgres vectorize_engine.so [.] VExecStoreColumns\n 4.86% postgres postgres [.] bpchareq\n 4.80% postgres vectorize_engine.so [.] vfloat8_accum\n 3.78% postgres postgres [.] tts_minimal_getsomeattrs\n 3.66% postgres vectorize_engine.so [.] VExecAgg\n 3.38% postgres postgres [.] hashbpchar\n\nand with JIT:\n\n 13.88% postgres postgres [.] tts_buffer_heap_getsomeattrs\n 7.15% postgres vectorize_engine.so [.] vfloat8_accum\n 6.03% postgres postgres [.] HeapTupleSatisfiesVisibility\n 5.55% postgres postgres [.] bpchareq\n 4.42% postgres vectorize_engine.so [.] VExecStoreColumns\n 4.19% postgres postgres [.] hashbpchar\n 4.09% postgres vectorize_engine.so [.] vfloat8pl\n\n> On Mon, Feb 24, 2020 at 8:43 PM Hubert Zhang <hzhang@pivotal.io \n> <mailto:hzhang@pivotal.io>> wrote:\n>\n> Hi Konstantin,\n> I have added you as a collaborator on github. Please accepted and\n> try again.\n> I think non collaborator could also open pull requests.\n>\n> On Mon, Feb 24, 2020 at 8:02 PM Konstantin Knizhnik\n> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> wrote:\n>\n>\n>\n> On 24.02.2020 05:08, Hubert Zhang wrote:\n>> Hi\n>>\n>> On Sat, Feb 22, 2020 at 12:58 AM Konstantin Knizhnik\n>> <k.knizhnik@postgrespro.ru\n>> <mailto:k.knizhnik@postgrespro.ru>> wrote:\n>>\n>>\n>>\n>> On 12.02.2020 13:12, Hubert Zhang wrote:\n>>> On Tue, Feb 11, 2020 at 1:20 AM Konstantin Knizhnik\n>>> <k.knizhnik@postgrespro.ru\n>>> <mailto:k.knizhnik@postgrespro.ru>> wrote:\n>>>\n>>>\n>>> So looks like PG-13 provides significant advantages\n>>> in OLAP queries comparing with 9.6!\n>>> Definitely it doesn't mean that vectorized executor\n>>> is not needed for new version of Postgres.\n>>> Once been ported, I expect that it should provide\n>>> comparable improvement of performance.\n>>>\n>>> But in any case I think that vectorized executor\n>>> makes sense only been combine with columnar store.\n>>>\n>>>\n>>> Thanks for the test. 
+1 on vectorize should be combined\n>>> with columnar store. I think when we\n>>> support this extension\n>>> on master, we could try the new zedstore.\n>>> I'm not active on this work now, but will continue when\n>>> I have time. Feel free to join and bring vops's features into\n>>> this extension.\n>>> Thanks\n>>>\n>>> Hubert Zhang\n>>\n>> I have ported vectorize_engine to the master.\n>> It took longer than I expected: a lot of things were\n>> changed in the executor.\n>>\n>> Results (seconds) are the following:\n>>\n>> par.workers | PG9_6 vec=off | PG9_6 vec=on | master vec=off jit=on | master vec=off jit=off | master vec=on jit=on | master vec=on jit=off\n>> 0 | 36 | 20 | 16 | 25.5 | 15 | 17.5\n>> 4 | 10 | - | 5 | 7 | - | -\n>>\n>>\n>> So it proves the theory that JIT provides almost the same\n>> speedup as a vector executor (both eliminate\n>> interpretation overhead, but in different ways).\n>> I am still not sure that we need a vectorized executor:\n>> with standard heap it provides almost no\n>> improvement over the current JIT version.\n>> But in any case I am going to test it with vertical\n>> storage (zedstore or cstore).\n>>\n>>\n>> Thanks for the porting and testing.\n>> Yes, PG master and 9.6 have many changes, not only in the executor,\n>> but also in the tupletableslot interface.\n>>\n>> What matters for the performance of JIT and vectorization is the\n>> implementation. This is just the beginning of the vectorization\n>> work; as your vops extension reported, vectorization\n>> could run 10 times faster in PG. With the overhead of row\n>> storage (heap), we may not reach that speedup, but I think we\n>> could do better. 
Also +1 on vertical storage.\n>>\n>> BTW, welcome to submit your PR for the PG master version.\n>\n>\n> Sorry, but I have no permissions to push changes to your\n> repository.\n> I can certainly create my own fork of vectorize_engine, but I\n> think it will be beter if I push pg13 branch in your repository.\n>\n>\n>\n>\n> -- \n> Thanks\n>\n> Hubert Zhang\n>\n>\n>\n> -- \n> Thanks\n>\n> Hubert Zhang\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 25 Feb 2020 13:44:25 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Yet another vectorized engine"
},
{
    "msg_contents": "I have ported vectorize_engine for zedstore (vertical table AM).\nResults of TPCH-10G/Q1 (seconds) are the following:\n\npar.workers | PG9_6 vec=off | PG9_6 vec=on | master vec=off jit=on | master vec=off jit=off | master vec=on jit=on | master vec=on jit=off | zedstore vec=off jit=on | zedstore vec=off jit=off | zedstore vec=on jit=on | zedstore vec=on jit=off\n0 | 36 | 20 | 16 | 25.5 | 15 | 17.5 | 18 | 26 | 17 | 19\n4 | 10 | - | 5 | 7 | - | - | 5 | 7 | - | -\n\nAs you can see from this table, query execution time without\nvectorization is almost the same for zedstore as for standard heap.\nIt means that expression execution overhead dominates in this case\nregardless of the underlying storage.\nEnabling the vectorize engine increases the speed of zedstore as well as of\nstandard heap.\nBut still standard heap is faster.\n\nMay be my implementation of extracting data from zedstore is not optimal -\nI just call zsbt_tid_scan_next + zsbt_attr_fetch in a loop.\nI attached my implementation of zedstoream_getnexttile (I have added\nscan_getnexttile to the tableAM interface).\n\nAlso I noticed that currently zedstore doesn't correctly calculate the set\nof used attributes and so extracts useless data.\nFor example a query like \"select sum(x) from foo\" causes fetching of all\nattributes of foo although we need just \"x\".\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 25 Feb 2020 19:40:54 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Yet another vectorized engine"
},
{
    "msg_contents": "On 25.02.2020 19:40, Konstantin Knizhnik wrote:\n> I have ported vectorize_engine for zedstore (vertical table AM).\n> Results of TPCH-10G/Q1 (seconds) are the following:\n>\n> par.workers | PG9_6 vec=off | PG9_6 vec=on | master vec=off jit=on | master vec=off jit=off | master vec=on jit=on | master vec=on jit=off | zedstore vec=off jit=on | zedstore vec=off jit=off | zedstore vec=on jit=on | zedstore vec=on jit=off\n> 0 | 36 | 20 | 16 | 25.5 | 15 | 17.5 | 18 | 26 | 17 | 19\n> 4 | 10 | - | 5 | 7 | - | - | 5 | 7 | - | -\n>\n\nAfter correct calculation of the used-columns bitmapset and passing it to\nthe table_beginscan_with_column_projection function, zedstore+vectorize_engine\nshows the best result (without parallel execution):\n\npar.workers | PG9_6 vec=off | PG9_6 vec=on | master vec=off jit=on | master vec=off jit=off | master vec=on jit=on | master vec=on jit=off | zedstore vec=off jit=on | zedstore vec=off jit=off | zedstore vec=on jit=on | zedstore vec=on jit=off\n0 | 36 | 20 | 16 | 25.5 | 15 | 17.5 | 18 | 26 | 14 | 16\n4 | 10 | - | 5 | 7 | - | - | 5 | 7 | - | -\n\nbut still the difference with vanilla is minimal.\n\nProfiler top is the following:\n\n 16.30% postgres postgres [.] zedstoream_getnexttile\n 6.98% postgres postgres [.] decode_chunk\n 6.68% postgres liblz4.so.1.7.1 [.] LZ4_decompress_safe\n 5.37% postgres vectorize_engine.so [.] vfloat8_accum\n 5.23% postgres postgres [.] 
bpchareq\n\n-- \n\nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 26 Feb 2020 12:51:35 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Yet another vectorized engine"
},
{
"msg_contents": "Hi Konstantin,\n\nOn Tue, Feb 25, 2020 at 6:44 PM Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> wrote:\n\n>\n>\n> On 25.02.2020 11:06, Hubert Zhang wrote:\n>\n> Hi Konstantin,\n>\n> I checkout your branch pg13 in repo\n> https://github.com/zhangh43/vectorize_engine\n> After I fixed some compile error, I tested Q1 on TPCH-10G\n> The result is different from yours and vectorize version is too slow. Note\n> that I disable parallel worker by default.\n> no JIT no Vectorize: 36 secs\n> with JIT only: 23 secs\n> with Vectorize only: 33 secs\n> JIT + Vectorize: 29 secs\n>\n> My config option is `CFLAGS='-O3 -g -march=native'\n> --prefix=/usr/local/pgsql/ --disable-cassert --enable-debug --with-llvm`\n> I will do some spike on why vectorized is so slow. Could you please\n> provide your compile option and the TPCH dataset size and your\n> queries(standard Q1?) to help me to debug on it.\n>\n>\n>\n> Hi, Hubert\n>\n> Sorry, looks like I have used slightly deteriorated snapshot of master so\n> I have not noticed some problems.\n> Fixes are committed.\n>\n> Most of the time is spent in unpacking heap tuple\n> (tts_buffer_heap_getsomeattrs):\n>\n> 24.66% postgres postgres [.] tts_buffer_heap_getsomeattrs\n> 8.28% postgres vectorize_engine.so [.] VExecStoreColumns\n> 5.94% postgres postgres [.] HeapTupleSatisfiesVisibility\n> 4.21% postgres postgres [.] bpchareq\n> 4.12% postgres vectorize_engine.so [.] vfloat8_accum\n>\n>\n> In my version of nodeSeqscan I do not keep all fetched 1024 heap tuples\n> but stored there attribute values in vector columns immediately.\n> But to avoid extraction of useless data it is necessary to know list of\n> used columns.\n> The same problem is solved in zedstore, but unfortunately there is no\n> existed method in Postgres to get list\n> of used attributes. 
I have done it but my last implementation contains\n> error which cause loading of all columns.\n> Fixed version is committed.\n>\n> Now profile without JIT is:\n>\n> 15.52% postgres postgres [.] tts_buffer_heap_getsomeattrs\n> 10.25% postgres postgres [.] ExecInterpExpr\n> 6.54% postgres postgres [.] HeapTupleSatisfiesVisibility\n> 5.12% postgres vectorize_engine.so [.] VExecStoreColumns\n> 4.86% postgres postgres [.] bpchareq\n> 4.80% postgres vectorize_engine.so [.] vfloat8_accum\n> 3.78% postgres postgres [.] tts_minimal_getsomeattrs\n> 3.66% postgres vectorize_engine.so [.] VExecAgg\n> 3.38% postgres postgres [.] hashbpchar\n>\n> and with JIT:\n>\n> 13.88% postgres postgres [.] tts_buffer_heap_getsomeattrs\n> 7.15% postgres vectorize_engine.so [.] vfloat8_accum\n> 6.03% postgres postgres [.] HeapTupleSatisfiesVisibility\n> 5.55% postgres postgres [.] bpchareq\n> 4.42% postgres vectorize_engine.so [.] VExecStoreColumns\n> 4.19% postgres postgres [.] hashbpchar\n> 4.09% postgres vectorize_engine.so [.] vfloat8pl\n>\n>\nI also tested Q1 with your latest code. The result of the vectorized version is still slow.\nPG13 native: 38 secs\nPG13 Vec: 30 secs\nPG13 JIT: 23 secs\nPG13 JIT+Vec: 27 secs\n\nMy perf result is as below. There are three parts:\n1. lookup_hash_entry (43.5%): this part is not vectorized yet.\n2. scan part: fetch_input_tuple (36%)\n3. vadvance_aggregates part (20%)\nI also perfed on the PG96 vectorized version and got similar perf results, and\nrunning times of vectorized PG96 and PG13 are also similar. But PG13 is much\nfaster than PG96. 
So I just wonder whether we merge all the latest executor\ncode of PG13 into the vectorized PG13 branch?\n\n- agg_fill_hash_table\n  - 43.50% lookup_hash_entry (inlined)\n    + 39.07% LookupTupleHashEntry\n    0.56% ExecClearTuple (inlined)\n  - 36.06% fetch_input_tuple\n    - ExecProcNode (inlined)\n      - 36.03% VExecScan\n        - 34.60% ExecScanFetch (inlined)\n          - ExecScanFetch (inlined)\n            - VSeqNext\n              + 16.64% table_scan_getnextslot (inlined)\n              - 10.29% slot_getsomeattrs (inlined)\n                - 10.17% slot_getsomeattrs_int\n                  + tts_buffer_heap_getsomeattrs\n              7.14% VExecStoreColumns\n        + 1.38% ExecQual (inlined)\n  - 20.30% Vadvance_aggregates (inlined)\n    - 17.46% Vadvance_transition_function (inlined)\n      + 11.95% vfloat8_accum\n      + 4.74% vfloat8pl\n      0.75% vint8inc_any\n  + 2.77% ExecProject (inlined)\n\n-- \nThanks\n\nHubert Zhang",
"msg_date": "Wed, 26 Feb 2020 18:11:26 +0800",
"msg_from": "Hubert Zhang <hzhang@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Yet another vectorized engine"
},
{
"msg_contents": "On 26.02.2020 13:11, Hubert Zhang wrote:\n>\n>\n> and with JIT:\n>\n> 13.88% postgres postgres [.]\n> tts_buffer_heap_getsomeattrs\n> 7.15% postgres vectorize_engine.so [.] vfloat8_accum\n> 6.03% postgres postgres [.]\n> HeapTupleSatisfiesVisibility\n> 5.55% postgres postgres [.] bpchareq\n> 4.42% postgres vectorize_engine.so [.] VExecStoreColumns\n> 4.19% postgres postgres [.] hashbpchar\n> 4.09% postgres vectorize_engine.so [.] vfloat8pl\n>\n>\n> I also tested Q1 with your latest code. Result of vectorized is still \n> slow.\n> PG13 native: 38 secs\n> PG13 Vec: 30 secs\n> PG13 JIT: 23 secs\n> PG13 JIT+Vec: 27 secs\n>\n\nIt is strange that your results are much slower than my and profile is \nvery different.\nWhich postgres configuration you are using?\n\n\n> My perf result is as belows. There are three parts:\n> 1. lookup_hash_entry(43.5%) this part is not vectorized yet.\nIt is vectorized in some sense: lookup_hash_entry performs bulk of hash \nlookups and pass array with results of such lookups to aggregate \ntransmit functions.\nIt will be possible to significantly increase speed of HashAgg if we \nstore data in order of grouping attributes and use RLE (run length \nencoding) to peform just one\nhash lookup for group of values. But it requires creation of special \npartitions (like it is done in Vertica and VOPS).\n\n> 2. scan part: fetch_input_tuple(36%)\n> 3. vadvance_aggregates part(20%)\n> I also perfed on PG96 vectorized version and got similar perf results \n> and running time of vectorized PG96 and PG13 are also similar. But \n> PG13 is much faster than PG96. So I just wonder whether we merge all \n> the latest executor code of PG13 into the vectorized PG13 branch?\n\nSorry, I do not understand the question. vectorize_executor contains \npatched versions of nodeSeqscan and nodeAgg from standard executor.\nWhen performing porting to PG13, I took the latest version of nodeAgg \nand tried to apply your patches to it. 
Certainly not always it was \npossible and I have to rewrite a lot of places. Concerning nodeSeqscan - \nI took the old version from vectorize_executor and ported it to PG13.\n\n>\n> - agg_fill_hash_table\n>    - 43.50% lookup_hash_entry (inlined)\n>       + 39.07% LookupTupleHashEntry\n>         0.56% ExecClearTuple (inlined)\n>    - 36.06% fetch_input_tuple\n>       - ExecProcNode (inlined)\n>          - 36.03% VExecScan\n>             - 34.60% ExecScanFetch (inlined)\n>                - ExecScanFetch (inlined)\n>                   - VSeqNext\n>                      + 16.64% table_scan_getnextslot (inlined)\n>                      - 10.29% slot_getsomeattrs (inlined)\n>                         - 10.17% slot_getsomeattrs_int\n>                            + tts_buffer_heap_getsomeattrs\n>                        7.14% VExecStoreColumns\n>             + 1.38% ExecQual (inlined)\n>    - 20.30% Vadvance_aggregates (inlined)\n>       - 17.46% Vadvance_transition_function (inlined)\n>          + 11.95% vfloat8_accum\n>          + 4.74% vfloat8pl\n>            0.75% vint8inc_any\n>       + 2.77% ExecProject (inlined)\n>\n\nIt is strange that I am not seeing lookup_hash_entry in profile in my case.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 26 Feb 2020 14:59:27 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Yet another vectorized engine"
},
{
"msg_contents": "On Wed, Feb 26, 2020 at 7:59 PM Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> wrote:\n\n>\n>\n> On 26.02.2020 13:11, Hubert Zhang wrote:\n>\n>\n>\n>> and with JIT:\n>>\n>> 13.88% postgres postgres [.] tts_buffer_heap_getsomeattrs\n>> 7.15% postgres vectorize_engine.so [.] vfloat8_accum\n>> 6.03% postgres postgres [.] HeapTupleSatisfiesVisibility\n>> 5.55% postgres postgres [.] bpchareq\n>> 4.42% postgres vectorize_engine.so [.] VExecStoreColumns\n>> 4.19% postgres postgres [.] hashbpchar\n>> 4.09% postgres vectorize_engine.so [.] vfloat8pl\n>>\n>>\n> I also tested Q1 with your latest code. Result of vectorized is still slow.\n> PG13 native: 38 secs\n> PG13 Vec: 30 secs\n> PG13 JIT: 23 secs\n> PG13 JIT+Vec: 27 secs\n>\n>\n> It is strange that your results are much slower than my and profile is\n> very different.\n> Which postgres configuration you are using?\n>\n>\n./configure CFLAGS=\"-O3 -g -march=native\" --prefix=/usr/local/pgsql/\n--disable-cassert --enable-debug --with-llvm\n I also use `PGXS := $(shell $(PG_CONFIG) --pgxs)` to compile\nvectorized_engine. So it will share the same compile configuration.\n\nMy perf result is as belows. There are three parts:\n> 1. lookup_hash_entry(43.5%) this part is not vectorized yet.\n>\n> It is vectorized in some sense: lookup_hash_entry performs bulk of hash\n> lookups and pass array with results of such lookups to aggregate transmit\n> functions.\n> It will be possible to significantly increase speed of HashAgg if we store\n> data in order of grouping attributes and use RLE (run length encoding) to\n> peform just one\n> hash lookup for group of values. But it requires creation of special\n> partitions (like it is done in Vertica and VOPS).\n>\n>\nYes, Vertica's partition needed to be pre-sorted on user defined columns.\nSo for TPCH Q1 on Postgres, we could not have that assumption. And my Q1\nplan uses HashAgg instead of GroupAgg based on cost.\n\n\n> 2. scan part: fetch_input_tuple(36%)\n> 3. 
vadvance_aggregates part(20%)\n> I also perfed on PG96 vectorized version and got similar perf results and\n> running time of vectorized PG96 and PG13 are also similar. But PG13 is much\n> faster than PG96. So I just wonder whether we merge all the latest executor\n> code of PG13 into the vectorized PG13 branch?\n>\n>\n> Sorry, I do not understand the question. vectorize_executor contains\n> patched versions of nodeSeqscan and nodeAgg from standard executor.\n> When performing porting to PG13, I took the latest version of nodeAgg and\n> tried to apply your patches to it. Certainly not always it was possible and\n> I have to rewrite a lt of places. Concerning nodeSeqscan - I took old\n> version from vectorize_executor and port it to PG13.\n>\n\n> It is strange that I am not seeing lookup_hash_entry in profile in my\n> case.\n>\n>\nSo you already have the PG13 nodeAgg, that is good.\nYes, it is strange. Hash table probing is always the costly part.\nMy perf command `perf record --call-graph dwarf -p pid`\nCould you share your lineitem schema and Q1 query?\nMy schema and Q1 query are:\nCREATE TABLE lineitem (\n l_orderkey BIGINT NOT NULL,\n l_partkey INTEGER NOT NULL,\n l_suppkey INTEGER NOT NULL,\n l_linenumber INTEGER NOT NULL,\n l_quantity double precision NOT NULL,\n l_extendedprice double precision NOT NULL,\n l_discount double precision NOT NULL,\n l_tax double precision NOT NULL,\n l_returnflag CHAR(1) NOT NULL,\n l_linestatus CHAR(1) NOT NULL,\n l_shipdate DATE NOT NULL,\n l_commitdate DATE NOT NULL,\n l_receiptdate DATE NOT NULL,\n l_shipinstruct CHAR(25) NOT NULL,\n l_shipmode CHAR(10) NOT NULL,\n l_comment VARCHAR(44) NOT NULL\n);\nselect\n l_returnflag,\n l_linestatus,\n sum(l_quantity) as sum_qty,\n sum(l_extendedprice) as sum_base_price,\n sum(l_extendedprice * (1 - l_discount)) as sum_disc_price,\n sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge,\n avg(l_quantity) as avg_qty,\n avg(l_extendedprice) as avg_price,\n avg(l_discount) as 
avg_disc,\n    count(l_discount) as count_order\nfrom\n    lineitem\nwhere\n    l_shipdate <= date '1998-12-01' - interval '106 day'\ngroup by\n    l_returnflag,\n    l_linestatus\n;\n\n\n-- \nThanks\n\nHubert Zhang",
"msg_date": "Thu, 27 Feb 2020 11:01:27 +0800",
"msg_from": "Hubert Zhang <hzhang@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Yet another vectorized engine"
},
{
    "msg_contents": "Hi Konstantin,\nI also vimdiffed nodeAgg.c in your PG13 branch with nodeAgg.c in pg's main\nrepo.\nMany functions have changed from PG96 to PG13, e.g. 'advance_aggregates',\n'lookup_hash_entry'.\nThe vectorized nodeAgg seems to still follow the PG96 way of implementing\nthese functions.\nIn general, I think we'd better port the executor of PG13 to a vectorized\nexecutor of PG13 instead of merging some PG13 code into the vectorized executor\nof PG96 to make it work, because it's hard to determine which functions\nneed to be merged, and it's buggy if the executor code of both PG13 and PG96\nexists in one branch.\n\nWhat's your opinion?",
"msg_date": "Thu, 27 Feb 2020 16:09:05 +0800",
"msg_from": "Hubert Zhang <hzhang@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Yet another vectorized engine"
},
{
    "msg_contents": "\n\nOn 27.02.2020 11:09, Hubert Zhang wrote:\n> Hi Konstantin,\n> I also vimdiff nodeAgg.c in your PG13 branch with nodeAgg.c in pg's \n> main repo.\n> Many functions has changed from PG96 to PG13, e.g. \n> 'advance_aggregates', 'lookup_hash_entry'\n> The vectorized nodeAgg seems still follow the PG96 way of implementing \n> these functions.\n> In general, I think we'd better port executor of PG13 to vectorized \n> executor of PG13 instead of merge some PG13 code into vectorized \n> executor of PG96 to make it works. Because It's hard to determine \n> which functions need to be merged and it's buggy if the executor code \n> of both PG13 and PG96 exist in one branch.\n>\n> What's your opinion?\n>\n\nIn the new version of Postgres all the logic of aggregate transitions is \nencapsulated in an expression and performed by execExprInterp or generated \nJIT code.\nIf we are not going to embed the vectorize engine in the kernel and continue to \ndevelop it as an extension, then I do not have any good idea how to achieve \nit without\ncopying and patching the code of ExecInterpExpr.\n\nIn any case, the current prototype doesn't show any noticeable \nperformance improvement compared with the existing executor with JIT enabled.\nAnd providing a vectorized version of ExecInterpExpr will not help to \nincrease speed (according to the profile, time is spent in other places).\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Thu, 27 Feb 2020 12:03:54 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Yet another vectorized engine"
}
] |
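The batching idea this thread keeps profiling — one transition-function call per 1024-tuple column batch instead of one call per tuple — can be sketched outside the executor. This is a hypothetical stand-in, not the vectorize_engine code: `Float8AccumState` is a simplified version of the real float8_accum transition state (which also tracks sumX2), and the selection bitmap stands in for the result of a preceding qual.

```c
#include <assert.h>
#include <stddef.h>

#define BATCHSIZE 1024

/* Simplified per-group running state for float8 sum()/avg(). */
typedef struct
{
    double N;       /* number of accumulated values */
    double sumX;    /* running sum */
} Float8AccumState;

/* Scalar path: one transition call per input tuple. */
static void
scalar_accum(Float8AccumState *state, double value)
{
    state->N += 1;
    state->sumX += value;
}

/* Vectorized path: one call consumes a whole batch; sel is an optional
 * selection bitmap from a preceding qual (NULL means all rows pass).
 * The per-call overhead (function dispatch, fmgr setup in the real
 * executor) is paid once per batch instead of once per tuple. */
static void
batch_accum(Float8AccumState *state, const double *values,
            const char *sel, size_t n)
{
    for (size_t i = 0; i < n; i++)
    {
        if (sel && !sel[i])
            continue;
        state->N += 1;
        state->sumX += values[i];
    }
}
```

Either path produces the same transition state; the vectorized one simply amortizes the call overhead that shows up as `Vadvance_transition_function` in the profiles above.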
[
{
    "msg_contents": "Here's a silly insomnia-inspired little patch that removes useless\n\"return;\" lines from some routines. (I left some alone, because they\nseemed to be there more for documentation purposes, such as the ones in\nfrom_char_set_mode and from_char_set_int; also regcomp.c since the\npedigree there is unclear.)\n\nThis seems pretty uncontroversial, so I'm not thinking of waiting over\nthe US holidays to get opinions about it ...\n\n-- \nÁlvaro Herrera                PostgreSQL Expert, https://www.2ndQuadrant.com/",
"msg_date": "Thu, 28 Nov 2019 11:46:53 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "remove useless returns"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Here's a silly insomnia-inspired little patch that removes useless\n> \"return;\" lines from some routines.\n\n+1, I always thought that was poor style.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 28 Nov 2019 10:26:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: remove useless returns"
}
] |
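For readers skimming the thread, a made-up two-function example (not taken from the actual patch) of the pattern being removed — a bare `return;` as the final statement of a void function:

```c
#include <assert.h>

static int calls = 0;

/* Before the cleanup: the trailing "return;" is a no-op, since control
 * falls off the end of a void function anyway. */
static void
bump_before(void)
{
    calls++;
    return;                     /* useless: this is what gets deleted */
}

/* After the cleanup: behavior is identical, one line shorter. */
static void
bump_after(void)
{
    calls++;
}
```

The cases Alvaro left alone are ones where the `return;` documents an early-exit structure rather than merely terminating the function.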
[
{
"msg_contents": "Hackers,\n\nWhile working on the problem of XID wraparound within the LISTEN/NOTIFY\nsystem, I tried to increment XIDs by more than one per transaction. \nThis leads to a number of test failures, many which look like:\n\n+ERROR: could not access status of transaction 7485\n+DETAIL: Could not read from file \"pg_subtrans/0000\" at offset 24576: \nread too few bytes.\n\nI might not have read the right documentation, but....\n\nI do not see anything in src/backend/access/transam/README nor elsewhere\ndocumenting a design decision or assumption that transaction IDs must\nbe assigned contiguously. I suppose this is such a fundamental\nassumption that it is completely implicit and nobody thought to document\nit, but I'd like to check for two reasons:\n\nFirst, I'd like a good method of burning through transaction ids in\ntests designed to check for problems in XID wrap-around.\n\nSecond, I'd like to add Asserts where appropriate regarding this\nassumption. It seems strange to me that I should have gotten as far\nas a failing read() without having tripped an Assert somewhere along the\nway.\n\nTo duplicate the errors I hit, you can either apply this simple change:\n\n\ndiff --git a/src/include/access/transam.h b/src/include/access/transam.h\nindex 33fd052156..360b7335bb 100644\n--- a/src/include/access/transam.h\n+++ b/src/include/access/transam.h\n@@ -83,7 +83,7 @@ FullTransactionIdFromEpochAndXid(uint32 epoch, \nTransactionId xid)\n static inline void\n FullTransactionIdAdvance(FullTransactionId *dest)\n {\n- dest->value++;\n+ dest->value += 2;\n while (XidFromFullTransactionId(*dest) < FirstNormalTransactionId)\n dest->value++;\n }\n\n\nor apply the much larger WIP patch, attached, and then be sure to\nprovide the --enable-xidcheck flag to configure before building.\n\n-- \nMark Dilger",
"msg_date": "Thu, 28 Nov 2019 10:51:48 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Do XID sequences need to be contiguous?"
},
{
"msg_contents": "Mark Dilger <hornschnorter@gmail.com> writes:\n> While working on the problem of XID wraparound within the LISTEN/NOTIFY\n> system, I tried to increment XIDs by more than one per transaction. \n> This leads to a number of test failures, many which look like:\n\nIIRC, the XID-creation logic is designed to initialize the next clog\npage whenever it allocates an exact-multiple-of-BLCKSZ*4 transaction\nnumber. Skipping over such numbers would create trouble.\n\n> First, I'd like a good method of burning through transaction ids in\n> tests designed to check for problems in XID wrap-around.\n\nDon't \"burn through them\". Stop the cluster and use pg_resetwal to\nset the XID counter wherever you want it. (You might need to set it\njust before a page or segment boundary; I'm not sure if pg_resetwal\nhas any logic of its own to initialize a new CLOG page/file when you\nmove the counter this way. Perhaps it's worth improving that.)\n\n> Second, I'd like to add Asserts where appropriate regarding this\n> assumption.\n\nI'm not excited about that, and it's *certainly* not a problem that\njustifies additional configure infrastructure.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 28 Nov 2019 16:25:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Do XID sequences need to be contiguous?"
},
{
"msg_contents": "On Fri, Nov 29, 2019 at 12:22 AM Mark Dilger <hornschnorter@gmail.com> wrote:\n>\n> Hackers,\n>\n> While working on the problem of XID wraparound within the LISTEN/NOTIFY\n> system, I tried to increment XIDs by more than one per transaction.\n> This leads to a number of test failures, many which look like:\n>\n> +ERROR: could not access status of transaction 7485\n> +DETAIL: Could not read from file \"pg_subtrans/0000\" at offset 24576:\n> read too few bytes.\n>\n> I might not have read the right documentation, but....\n>\n> I do not see anything in src/backend/access/transam/README nor elsewhere\n> documenting a design decision or assumption that transaction IDs must\n> be assigned contiguously. I suppose this is such a fundamental\n> assumption that it is completely implicit and nobody thought to document\n> it, but I'd like to check for two reasons:\n>\n> First, I'd like a good method of burning through transaction ids in\n> tests designed to check for problems in XID wrap-around.\n>\n\nAs Tom pointed out and as mentioned in the comments \"If we are\nallocating the first XID of a new page of the commit log, zero out\nthat commit-log page before returning.\", we need to take care of\nextending the CLOG while advancing TransactionIds. I have some old\nscript for burning transactionid's which I am attaching here. It\nmight help you. I think this is provided long back by Jeff Janes.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 29 Nov 2019 17:39:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Do XID sequences need to be contiguous?"
}
] |
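Tom's remark about clog page initialization can be illustrated with the arithmetic involved. The constants below mirror the defaults (BLCKSZ = 8192, two status bits per xact, so 32768 xacts per page), but this is a simplified sketch, not the code from src/backend/access/transam/clog.c:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified clog arithmetic, assuming the default BLCKSZ of 8192.
 * Each transaction's status takes 2 bits, so one byte holds 4 xacts
 * and one 8 kB page holds 8192 * 4 = 32768 of them. */
#define BLCKSZ              8192
#define CLOG_XACTS_PER_BYTE 4
#define CLOG_XACTS_PER_PAGE (BLCKSZ * CLOG_XACTS_PER_BYTE)

typedef uint32_t TransactionId;

/* Which clog page holds the status bits for this xid? */
static int
clog_page_of(TransactionId xid)
{
    return (int) (xid / CLOG_XACTS_PER_PAGE);
}

/* A new page is zeroed only when the allocator hands out the *first*
 * xid of that page, i.e. an exact multiple of BLCKSZ * 4.  An
 * allocator that advances by 2 from an odd value never produces such
 * a boundary xid, so it lands on pages that were never initialized. */
static int
is_first_xid_of_page(TransactionId xid)
{
    return xid % CLOG_XACTS_PER_PAGE == 0;
}
```

The same page math applies to the other SLRUs (pg_subtrans, etc.), which is consistent with the "could not read from file" failures reported at the top of the thread when XIDs were incremented by 2.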
[
{
"msg_contents": "Hello hackers,\n\nWhen dealing with an OS upgrade, a some kind of anomaly related to\ncollations was found.\nSuppose, we have Debian 8 with postgresql 12 installed.\nThen we create a custom collation:\nCREATE COLLATION russian (provider=icu, locale='ru_RU');\nand\nSELECT oid, collname, collnamespace, collprovider, collversion FROM\npg_collation WHERE collname like 'ru%';\nreturns\n 12847 | ru-RU-x-icu | 11 | i | 58.0.0.50\n...\n 16384 | russian | 2200 | i | 58.0.0.50\nThen let's create two tables with text columns and indexes and fill them\nwith some data:\nCREATE TABLE test_icu_ru (f1 varchar COLLATE \"ru-RU-x-icu\", i int);\nINSERT INTO test_icu_ru SELECT chr(x), x FROM generate_series(1, 2000)\nas y(x); CREATE INDEX ON test_icu_ru (f1);\n\nCREATE TABLE test_icu_russian (f1 varchar COLLATE \"russian\", i int);\nINSERT INTO test_icu_russian SELECT chr(x), x FROM generate_series(1,\n2000) as y(x); CREATE INDEX ON test_icu_russian (f1);\n\nPerform two test queries:\npostgres=# select * from test_icu_ru where f1=chr(821);\n f1 | i \n----+-----\n ̵ | 821\n(1 row)\n\npostgres=# select * from test_icu_russian where f1=chr(821);\n f1 | i \n----+-----\n ̵ | 821\n(1 row)\n\npostgres=# EXPLAIN select * from test_icu_ru where f1=chr(821);\n QUERY\nPLAN \n--------------------------------------------------------------------------------------\n Index Scan using test_icu_ru_f1_idx on test_icu_ru (cost=0.28..8.29\nrows=1 width=6)\n Index Cond: ((f1)::text = '̵'::text)\n(2 rows)\n\npostgres=# EXPLAIN select * from test_icu_russian where f1=chr(821);\n QUERY\nPLAN \n------------------------------------------------------------------------------------------------\n Index Scan using test_icu_russian_f1_idx on test_icu_russian \n(cost=0.28..8.29 rows=1 width=6)\n Index Cond: ((f1)::text = '̵'::text)\n(2 rows)\n(The indexes are indeed used by the above queries.)\n\nNow suppose that the OS is upgraded to Debian 9 (or the pgdata just\nmoved to Debian 9 with the postgresql 
12).\nThe same queries return:\npostgres=# select * from test_icu_ru where f1=chr(821);\nWARNING: collation \"ru-RU-x-icu\" has version mismatch\nDETAIL: The collation in the database was created using version\n58.0.0.50, but the operating system provides version 153.64.29.\nHINT: Rebuild all objects affected by this collation and run ALTER\nCOLLATION pg_catalog.\"ru-RU-x-icu\" REFRESH VERSION, or build PostgreSQL\nwith the right library version.\n f1 | i\n----+---\n(0 rows)\n\npostgres=# select * from test_icu_russian where f1=chr(821);\nWARNING: collation \"russian\" has version mismatch\nDETAIL: The collation in the database was created using version\n58.0.0.50, but the operating system provides version 153.64.29.\nHINT: Rebuild all objects affected by this collation and run ALTER\nCOLLATION public.russian REFRESH VERSION, or build PostgreSQL with the\nright library version.\n f1 | i\n----+---\n(0 rows)\n\nWe get no data due to the real collation/sort order change but the\nwarning says what to do.\nThe query presented at\nhttps://www.postgresql.org/docs/12/sql-altercollation.html returns:\n Collation | Object \n-------------------------+-------------------------------------\n collation \"ru-RU-x-icu\" | column f1 of table test_icu_ru\n collation \"ru-RU-x-icu\" | index test_icu_ru_f1_idx\n collation russian | column f1 of table test_icu_russian\n collation russian | index test_icu_russian_f1_idx\nSo the documented behavior is observed.\n\nBut after pg_upgrade:\npg_createcluster 12 new\n/usr/lib/postgresql/12/bin/pg_upgrade -b /usr/lib/postgresql/12/bin -B\n/usr/lib/postgresql/12/bin -d /etc/postgresql/12/main -D\n/etc/postgresql/12/new\nIn the new cluster the same queries return:\npostgres=# select * from test_icu_russian where f1=chr(821);\nWARNING: collation \"russian\" has version mismatch\nDETAIL: The collation in the database was created using version\n58.0.0.50, but the operating system provides version 153.64.29.\nHINT: Rebuild all objects affected by 
this collation and run ALTER\nCOLLATION public.russian REFRESH VERSION, or build PostgreSQL with the\nright library version.\n f1 | i\n----+---\n(0 rows)\n\npostgres=# select * from test_icu_ru where f1=chr(821);\n f1 | i\n----+---\n(0 rows)\n(There is no warning for the predefined collation now.)\n\nThe query presented at\nhttps://www.postgresql.org/docs/12/sql-altercollation.html returns:\n Collation | Object \n-------------------+-------------------------------------\n collation russian | column f1 of table test_icu_russian\n collation russian | index test_icu_russian_f1_idx\n(2 rows)\n\nand\nSELECT oid, collname, collnamespace, collprovider, collversion FROM\npg_collation WHERE collname like 'ru%';\nreturns\n oid | collname | collnamespace | collprovider | collversion\n-------+-------------+---------------+--------------+-------------\n 12884 | ru-BY-x-icu | 11 | i | 153.64.29\n 12885 | ru-KG-x-icu | 11 | i | 153.64.29\n 12886 | ru-KZ-x-icu | 11 | i | 153.64.29\n 12887 | ru-MD-x-icu | 11 | i | 153.64.29\n 12888 | ru-RU-x-icu | 11 | i | 153.64.29\n 12889 | ru-UA-x-icu | 11 | i | 153.64.29\n 12883 | ru-x-icu | 11 | i | 153.64.29\n 12329 | ru_RU | 11 | c |\n 12328 | ru_RU.utf8 | 11 | c |\n 16402 | russian | 2200 | i | 58.0.0.50\nSo only the custom collation' version is actual, but predefined ones\ncorrespond to a newer libicu, but not to actual data.\n\nFor all that, REINDEX repairs both indexes:\npostgres=# REINDEX INDEX test_icu_russian_f1_idx;\nREINDEX\npostgres=# select * from test_icu_russian where f1=chr(821);\n f1 | i \n----+-----\n ̵ | 821\n(1 row)\n\npostgres=# REINDEX INDEX test_icu_ru_f1_idx;\nREINDEX\npostgres=# select * from test_icu_ru where f1=chr(821);\n f1 | i \n----+-----\n ̵ | 821\n(1 row)\n\nSo for now it seems dangerous to use predefined collations as their old\nversions are not preserved by pg_upgrade and the user doesn't know which\nindexes affected by the actual ICU collation changes.\n\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Thu, 28 Nov 2019 23:08:08 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_upgrade fails to preserve old versions of the predefined\n collations"
},
{
"msg_contents": "On Fri, Nov 29, 2019 at 9:08 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n> So for now it seems dangerous to use predefined collations as their old\n> versions are not preserved by pg_upgrade and the user doesn't know which\n> indexes affected by the actual ICU collation changes.\n\nYeah, we noticed this while working on a proposal for new\nper-database-object version dependency tracking, and Peter E has\nwritten a patch to address it:\n\nhttps://commitfest.postgresql.org/25/2328/\n\n\n",
"msg_date": "Fri, 29 Nov 2019 09:25:02 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails to preserve old versions of the predefined\n collations"
},
{
"msg_contents": "28.11.2019 23:25, Thomas Munro пишет:\n> On Fri, Nov 29, 2019 at 9:08 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n>> So for now it seems dangerous to use predefined collations as their old\n>> versions are not preserved by pg_upgrade and the user doesn't know which\n>> indexes affected by the actual ICU collation changes.\n> Yeah, we noticed this while working on a proposal for new\n> per-database-object version dependency tracking, and Peter E has\n> written a patch to address it:\n>\n> https://commitfest.postgresql.org/25/2328/\nThank you! This patch is working for me. After pg_upgrade with the\napplied patch I'm getting:\npostgres=# SELECT oid, collname, collnamespace, collprovider,\ncollversion FROM pg_collation WHERE collname like 'ru%';\n oid | collname | collnamespace | collprovider | collversion\n-------+-------------+---------------+--------------+-------------\n 17561 | ru-BY-x-icu | 11 | i | 58.0.0.50\n 17562 | ru-KG-x-icu | 11 | i | 58.0.0.50\n 17563 | ru-KZ-x-icu | 11 | i | 58.0.0.50\n 17564 | ru-MD-x-icu | 11 | i | 58.0.0.50\n 17565 | ru-RU-x-icu | 11 | i | 58.0.0.50\n 17566 | ru-UA-x-icu | 11 | i | 58.0.0.50\n 17567 | ru-x-icu | 11 | i | 58.0.0.50\n 17568 | ru_RU | 11 | c |\n 17569 | ru_RU.utf8 | 11 | c |\n 17696 | russian | 2200 | i | 58.0.0.50\n(10 rows)\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Thu, 28 Nov 2019 23:58:02 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade fails to preserve old versions of the predefined\n collations"
}
] |
[
{
"msg_contents": "This patch is a rather hacky implementation of the basic idea for\nimplementing FETCH ... WITH TIES, and potentially also PERCENT, by using\na window function expression to compute a stopping point.\n\nLarge chunks of this (the parser/ruleutils changes, docs, tests) are\ntaken from Surafel Temesgen's patch. The difference is that the executor\nchange in my version is minimal: Limit allows a boolean column in the\ninput to signal the point at which to stop. The planner inserts a\nWindowAgg node to compute the necessary condition using the rank()\nfunction.\n\nThe way this is done in the planner isn't (IMO) the best and should\nprobably be improved; in particular it currently misses some possible\noptimizations (most notably constant-folding of the offset+limit\nsubexpression). I also haven't tested it properly to see whether I broke\nanything, though it does pass regression.\n\n-- \nAndrew (irc:RhodiumToad)",
"msg_date": "Fri, 29 Nov 2019 05:39:57 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "A rather hackish POC for alternative implementation of WITH TIES"
},
{
"msg_contents": "On Fri, Nov 29, 2019 at 8:40 AM Andrew Gierth <andrew@tao11.riddles.org.uk>\nwrote:\n\n> This patch is a rather hacky implementation of the basic idea for\n> implementing FETCH ... WITH TIES, and potentially also PERCENT, by using\n> a window function expression to compute a stopping point.\n>\n> Large chunks of this (the parser/ruleutils changes, docs, tests) are\n> taken from Surafel Temesgen's patch. The difference is that the executor\n> change in my version is minimal: Limit allows a boolean column in the\n> input to signal the point at which to stop. The planner inserts a\n> WindowAgg node to compute the necessary condition using the rank()\n> function.\n>\n> The way this is done in the planner isn't (IMO) the best and should\n> probably be improved; in particular it currently misses some possible\n> optimizations (most notably constant-folding of the offset+limit\n> subexpression). I also haven't tested it properly to see whether I broke\n> anything, though it does pass regression.\n>\n>\n>\nUnlike most other executor node limit node has implementation for handling\nbackward scan that support cursor operation but your approach didn't do\nthis inherently because it outsource limitNode functionality to window\nfunction and window function didn't do this\n\neg.\n\npostgres=# begin;\n\nBEGIN\n\npostgres=# declare c cursor for select i from generate_series(1,1000000)\ns(i) order by i fetch first 2 rows with ties;\n\nDECLARE CURSOR\n\npostgres=# fetch all in c;\n\ni\n\n---\n\n1\n\n2\n\n(2 rows)\n\n\npostgres=# fetch backward all in c;\n\nERROR: cursor can only scan forward\n\nHINT: Declare it with SCROLL option to enable backward scan.\n\n\nEven with SCROLL option it is not working as limitNode does. It store the\nresult and return in backward scan that use more space than current limit\nand limit with ties implementation.\n\n\nIf am not mistaken the patch also reevaluate limit every time returning row\nbeside its not good for performance its will return incorrect result with\nlimit involving volatile function\n\n\nregards\n\nSurafel",
"msg_date": "Mon, 6 Jan 2020 11:20:56 +0300",
"msg_from": "Surafel Temesgen <surafel3000@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A rather hackish POC for alternative implementation of WITH TIES"
},
{
"msg_contents": ">>>>> \"Surafel\" == Surafel Temesgen <surafel3000@gmail.com> writes:\n\n Surafel> Unlike most other executor node limit node has implementation\n Surafel> for handling backward scan that support cursor operation but\n Surafel> your approach didn't do this inherently because it outsource\n Surafel> limitNode functionality to window function and window function\n Surafel> didn't do this\n\nCorrect. But this is a non-issue: if you want to be able to do backward\nscan you are supposed to declare the cursor as SCROLL; if it happens to\nwork without it, that is pure coincidence. (Cursors declared with neither\nSCROLL nor NO SCROLL support backwards scan only if the underlying plan\nsupports backward scan with no additional overhead, which is something\nyou can't predict from the query.)\n\nThe Limit node declares that it supports backwards scan if, and only if,\nits immediate child node supports it. It happens that WindowAgg does\nnot, so in this implementation, LIMIT ... WITH TIES will not support\nbackward scan without a tuplestore. I don't consider this an especially\nbig deal; backward scans are extremely rare (as shown by the fact that\nbugs in backward scan have tended to go unnoticed for decades, e.g. bug\n#15336), and therefore we should not optimize for them.\n\n Surafel> If am not mistaken the patch also reevaluate limit every time\n\nThe (offset+limit) expression is, yes. I noted in the original post that\nthis needs work - probably it should be pushed out to an InitPlan if it\ndoesn't fold to a constant. i.e. using the expression\n\n rank() over (...) > (select offset+limit)\n\nwhere it currently has\n\n rank() over (...) > (offset+limit)\n\n(Generating the limit expression so late in planning is the main thing\nthat needs changing to get this from a hack POC to usable code)\n\nThe main point here is that the same rather minimal executor changes\nallow support for not only WITH TIES but also PERCENT and possibly\narbitrary stop conditions as well. (I know I've often wanted LIMIT WHEN\nto stop a query at a data-dependent point without having to resort to\nrecursion - this patch doesn't quite get there, because of the scope\nissues involved in analyzing the WHEN condition, but it at least sets up\nthe concept.)\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Tue, 07 Jan 2020 23:10:53 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "Re: A rather hackish POC for alternative implementation of WITH TIES"
},
{
"msg_contents": "Hello\n\nAs this is a valuable feature, it would be good to have something happen\nhere. I wouldn't like to have pg13 ship with no implementation of WITH\nTIES at all.\n\nMy own inclination is that Andrew's implementation, being more general\nin nature, would be the better one to have in the codebase; but we don't\nhave a complete patch yet. Can we reach some compromise such as if\nAndrew's patch is not completed then we push Surafel's?\n\nThanks\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 22 Jan 2020 19:06:28 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: A rather hackish POC for alternative implementation of WITH TIES"
},
{
"msg_contents": ">>>>> \"Alvaro\" == Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n\n Alvaro> My own inclination is that Andrew's implementation, being more\n Alvaro> general in nature, would be the better one to have in the\n Alvaro> codebase; but we don't have a complete patch yet. Can we reach\n Alvaro> some compromise such as if Andrew's patch is not completed then\n Alvaro> we push Surafel's?\n\nMine needs some attention to where exactly in planning the necessary\ntransformation work should be done; right now the planner part is a\nhack, intended to demonstrate the idea (and to let the executor changes\nwork) rather than actually be the final version. As I mentioned before,\nsome stuff does need to be pushed out to an InitPlan to make it work\nwithout multiple-evaluation problems.\n\n(A second opinion from another planner expert would be welcome on that\npart)\n\nI was largely holding off on doing further work hoping for some\ndiscussion of which way we should go. If you think my approach is worth\npursuing (I haven't seriously tested the performance, but I'd expect it\nto be slower than Surafel's - the price you pay for flexibility) then I\ncan look at it further, but figuring out the planner stuff will take\nsome time.\n\n-- \nAndrew.\n\n\n",
"msg_date": "Wed, 22 Jan 2020 23:35:13 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "Re: A rather hackish POC for alternative implementation of WITH TIES"
},
{
"msg_contents": "On Wed, Jan 22, 2020 at 3:06 PM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n> My own inclination is that Andrew's implementation, being more general\n> in nature, would be the better one to have in the codebase; but we don't\n> have a complete patch yet. Can we reach some compromise such as if\n> Andrew's patch is not completed then we push Surafel's?\n\n+1\n\nOn Wed, Jan 22, 2020 at 4:35 PM Andrew Gierth <andrew@tao11.riddles.org.uk>\nwrote:\n> I was largely holding off on doing further work hoping for some\n> discussion of which way we should go. If you think my approach is worth\n> pursuing (I haven't seriously tested the performance, but I'd expect it\n> to be slower than Surafel's - the price you pay for flexibility) then I\n> can look at it further, but figuring out the planner stuff will take\n> some time.\n\nFlexibility with more generalized code is good, though if performance is\nsignificantly slower I would be concerned. I quickly reviewed the patch\nbut haven't tested it yet.\n\nIs it realistic to add PERCENT into this patch or would that be a future\nenhancement?\n\nThanks,\n\n*Ryan Lambert*",
"msg_date": "Fri, 24 Jan 2020 19:31:14 -0700",
"msg_from": "Ryan Lambert <ryan@rustprooflabs.com>",
"msg_from_op": false,
"msg_subject": "Re: A rather hackish POC for alternative implementation of WITH TIES"
},
{
"msg_contents": "On Wed, Jan 22, 2020 at 3:35 PM Andrew Gierth <andrew@tao11.riddles.org.uk>\nwrote:\n\n> >>>>> \"Alvaro\" == Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>\n>\n> I was largely holding off on doing further work hoping for some\n> discussion of which way we should go. If you think my approach is worth\n> pursuing (I haven't seriously tested the performance, but I'd expect it\n> to be slower than Surafel's - the price you pay for flexibility) then I\n> can look at it further, but figuring out the planner stuff will take\n> some time.\n>\n>\nOther alternative can be pushing the existing implementation\nwhich will be open to change in case of better-finished\nimplementation.\n\nregards\nSurafel",
"msg_date": "Thu, 26 Mar 2020 00:22:26 -0700",
"msg_from": "Surafel Temesgen <surafel3000@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A rather hackish POC for alternative implementation of WITH TIES"
},
{
"msg_contents": "On 2020-Mar-26, Surafel Temesgen wrote:\n\n> On Wed, Jan 22, 2020 at 3:35 PM Andrew Gierth <andrew@tao11.riddles.org.uk>\n> wrote:\n> \n> > >>>>> \"Alvaro\" == Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> >\n> > I was largely holding off on doing further work hoping for some\n> > discussion of which way we should go. If you think my approach is worth\n> > pursuing (I haven't seriously tested the performance, but I'd expect it\n> > to be slower than Surafel's - the price you pay for flexibility) then I\n> > can look at it further, but figuring out the planner stuff will take\n> > some time.\n>\n> Other alternative can be pushing the existing implementation\n> which will be open to change in case of better-finished\n> implementation.\n\nAt this point, I think that's what we should do.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 31 Mar 2020 12:24:42 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: A rather hackish POC for alternative implementation of WITH TIES"
}
] |
[
{
"msg_contents": "Hi all,\n\nSince commit d6c55de1, we support %m in the in-core port for printf\nand such. And it seems to me that we could do better for the frontend\ncode by reducing the dependency to strerror().\n\nOne advantage of doing a switch, or at least reduce the use of\nstrerror(), would be to ease the work of translators with more error\nmessages unified between the frontend and the backend. A possible\ndrawback is that this could be a cause of minor conflicts when\nback-patching. Always easy enough to fix, still that can be \nannoying.\n\nThoughts?\n--\nMichael",
"msg_date": "Fri, 29 Nov 2019 15:51:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Increase footprint of %m and reduce strerror()"
},
{
"msg_contents": "At Fri, 29 Nov 2019 15:51:15 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> Hi all,\n> \n> Since commit d6c55de1, we support %m in the in-core port for printf\n> and such. And it seems to me that we could do better for the frontend\n> code by reducing the dependency to strerror().\n> \n> One advantage of doing a switch, or at least reduce the use of\n> strerror(), would be to ease the work of translators with more error\n> messages unified between the frontend and the backend. A possible\n> drawback is that this could be a cause of minor conflicts when\n> back-patching. Always easy enough to fix, still that can be \n> annoying.\n> \n> Thoughts?\n\nIt sounds good to me. Message unification (including printf) needs\nsomehow treating trailing new lines, though. About translation\nburden, I'm not sure how the message unification eases translators'\nwork. Identical messages of different commands appear having different\nneighbours in different po files.\n\nBy the way aren't we going to have ereport on frontend?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 04 Dec 2019 15:32:11 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Increase footprint of %m and reduce strerror()"
},
{
"msg_contents": "On Wed, Dec 04, 2019 at 03:32:11PM +0900, Kyotaro Horiguchi wrote:\n> It sounds good to me. Message unification (including printf) needs\n> somehow treating trailing new lines, though. About translation\n> burden, I'm not sure how the message unification eases translators'\n> work. Identical messages of different commands appear having different\n> neighbours in different po files.\n\nNewlines are a problem. Still there are cases where we don't use\nthem. See for example pg_waldump.c. It seems like it would be first\ninteresting to fix the code paths where we know we can reduce the\nduplicates.\n\n> By the way aren't we going to have ereport on frontend?\n\nNot sure that this will happen, there are quite a few things to\nconsider related to what error hints and such should be for frontends.\nThat's quite a different discussion..\n--\nMichael",
"msg_date": "Thu, 5 Dec 2019 11:36:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Increase footprint of %m and reduce strerror()"
},
{
"msg_contents": "At Thu, 5 Dec 2019 11:36:48 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Wed, Dec 04, 2019 at 03:32:11PM +0900, Kyotaro Horiguchi wrote:\n> > It sounds good to me. Message unification (including printf) needs\n> > somehow treating trailing new lines, though. About translation\n> > burden, I'm not sure how the message unification eases translators'\n> > work. Identical messages of different commands appear having different\n> > neighbours in different po files.\n> \n> Newlines are a problem. Still there are cases where we don't use\n> them. See for example pg_waldump.c. It seems like it would be first\n> interesting to fix the code paths where we know we can reduce the\n> duplicates.\n\nSo, (IIUC) do we replace fprintf()s for error reporting together (but\nmaybe in a separate patch)?\n\n> > By the way aren't we going to have ereport on frontend?\n> \n> Not sure that this will happen, there are quite a few things to\n> consider related to what error hints and such should be for frontends.\n> That's quite a different discussion..\n\nAgreed.\n\n+1 for going that way after having above considerations.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 05 Dec 2019 12:06:54 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Increase footprint of %m and reduce strerror()"
},
{
"msg_contents": "(Just to clarifying the last mail..)\n\nAt Thu, 05 Dec 2019 12:06:54 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 5 Dec 2019 11:36:48 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> > On Wed, Dec 04, 2019 at 03:32:11PM +0900, Kyotaro Horiguchi wrote:\n> > > It sounds good to me. Message unification (including printf) needs\n> > > somehow treating trailing new lines, though. About translation\n> > > burden, I'm not sure how the message unification eases translators'\n> > > work. Identical messages of different commands appear having different\n> > > neighbours in different po files.\n> > \n> > Newlines are a problem. Still there are cases where we don't use\n> > them. See for example pg_waldump.c. It seems like it would be first\n> > interesting to fix the code paths where we know we can reduce the\n> > duplicates.\n> \n> So, (IIUC) do we replace fprintf()s for error reporting together (but\n> maybe in a separate patch)?\n> \n> > > By the way aren't we going to have ereport on frontend?\n> > \n> > Not sure that this will happen, there are quite a few things to\n> > consider related to what error hints and such should be for frontends.\n> > That's quite a different discussion..\n> \n> Agreed.\n> \n> +1 for going that way after having above considerations.\n\n(This might be took wrongly. The following would be clearer.)\n\nSince I see the above considertaions, I put +1 for this.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 05 Dec 2019 12:29:29 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Increase footprint of %m and reduce strerror()"
},
{
"msg_contents": "On Thu, Dec 05, 2019 at 12:29:29PM +0900, Kyotaro Horiguchi wrote:\n> At Thu, 05 Dec 2019 12:06:54 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n>> So, (IIUC) do we replace fprintf()s for error reporting together (but\n>> maybe in a separate patch)?\n\nI guess that we should do that at the end of the day. A lookup at the\nin-core tools I see three areas which stand out compared to the rest:\n- pg_waldump, and attached is a patch for it.\n- pgbench. However for this one we also have some status messages\nshowing up in stderr output, and the TAP tests have dependencies with\nthe output generated. This part is not plugged into the generic\nlogging facility yet, and we have 162 places where fprintf/stderr is\nused, so that's kind of messy.\n- pg_standby. For this one, we may actually be closer to just remove\nit from the tree :)\n--\nMichael",
"msg_date": "Fri, 6 Dec 2019 14:09:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Increase footprint of %m and reduce strerror()"
},
{
"msg_contents": "On Fri, Dec 06, 2019 at 02:09:05PM +0900, Michael Paquier wrote:\n> I guess that we should do that at the end of the day. A lookup at the\n> in-core tools I see three areas which stand out compared to the rest:\n> - pg_waldump, and attached is a patch for it.\n\nOkay, I have committed this one.\n--\nMichael",
"msg_date": "Tue, 24 Dec 2019 12:22:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Increase footprint of %m and reduce strerror()"
}
] |
[
{
"msg_contents": "I propose to change the default of ssl_min_protocol_version to TLSv1.2 \n(from TLSv1, which means 1.0). Older versions would still be supported, \njust not by default.\n\nThe reason is that TLS 1.0 and 1.1 are either already discouraged or \ndeprecated or will be by the time PostgreSQL 13 comes out. So this move \nwould be in the direction of \"secure by default\". Specifically, PCI DSS \ndisallows the use of TLS 1.0 and discourages 1.1 [0], and browser \nvendors are set to disable 1.0 and 1.1 in their products sometime soon [1].\n\nUsing TLS 1.2 requires OpenSSL 1.0.1, released in 2012. I find this to \nbe satisfied in CentOS 6 and Debian jessie (oldoldstable), for example.\n\nMore details also in my recent blog post [2].\n\n\n[0]: \nhttps://blog.pcisecuritystandards.org/are-you-ready-for-30-june-2018-sayin-goodbye-to-ssl-early-tls\n[1]: \nhttps://arstechnica.com/gadgets/2018/10/browser-vendors-unite-to-end-support-for-20-year-old-tls-1-0/\n[2]: \nhttps://www.2ndquadrant.com/en/blog/setting-ssl-tls-protocol-versions-with-postgresql-12/\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 29 Nov 2019 08:36:00 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Update minimum SSL version"
},
{
"msg_contents": "> On 29 Nov 2019, at 08:36, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> I propose to change the default of ssl_min_protocol_version to TLSv1.2 (from TLSv1, which means 1.0). Older versions would still be supported, just not by default.\n\n+1 for having a sane default with a way to fall back to older versions in case\nthey are required.\n\ncheers ./daniel\n\n",
"msg_date": "Fri, 29 Nov 2019 11:10:32 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "On Fri, Nov 29, 2019 at 11:10 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 29 Nov 2019, at 08:36, Peter Eisentraut <\n> peter.eisentraut@2ndquadrant.com> wrote:\n> >\n> > I propose to change the default of ssl_min_protocol_version to TLSv1.2\n> (from TLSv1, which means 1.0). Older versions would still be supported,\n> just not by default.\n>\n> +1 for having a sane default with a way to fall back to older versions in\n> case\n> they are required.\n>\n\n+1. As long as we still have support to change it down if needed, it's a\ngood thing to ship with a proper default.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Fri, 29 Nov 2019 13:40:48 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "On Fri, Nov 29, 2019 at 01:40:48PM +0100, Magnus Hagander wrote:\n> +1. As long as we still have support to change it down if needed, it's a\n> good thing to ship with a proper default.\n\n+1.\n--\nMichael",
"msg_date": "Fri, 29 Nov 2019 21:57:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Nov 29, 2019 at 01:40:48PM +0100, Magnus Hagander wrote:\n>> +1. As long as we still have support to change it down if needed, it's a\n>> good thing to ship with a proper default.\n\n> +1.\n\nWhat's the impact going to be on buildfarm members with older openssl\ninstallations? Perhaps \"none\", if they aren't running the ssl test\nsuite, but we should be clear about it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 29 Nov 2019 10:30:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "On Fri, Nov 29, 2019 at 10:30:47AM -0500, Tom Lane wrote:\n> What's the impact going to be on buildfarm members with older openssl\n> installations? Perhaps \"none\", if they aren't running the ssl test\n> suite, but we should be clear about it.\n\nThe buildfarm logs don't directly report the version of OpenSSL used\nas far as I recalled, and a quick lookup shows that.. Anyway, I\nrecall that all Windows buildfarm members linking to OpenSSL use at\nleast 1.0.2 on HEAD. For the others, I would be ready to suspect that\nsome of them are still using 0.9.8 and 1.0.0.\n\nAnyway, as we still support OpenSSL down to 0.9.8 on HEAD, shouldn't\nwe just patch the SSL TAP tests to make sure that we don't enforce an\nincorrect minimum version at configuration time?\n\n[... thinks more ...]\n\nActually, no, what I am writing here is incorrect. We should make\nsure of that the default configuration is correct at initdb time, and\nthe patch does not do that.\n--\nMichael",
"msg_date": "Sat, 30 Nov 2019 11:43:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Nov 29, 2019 at 10:30:47AM -0500, Tom Lane wrote:\n>> What's the impact going to be on buildfarm members with older openssl\n>> installations? Perhaps \"none\", if they aren't running the ssl test\n>> suite, but we should be clear about it.\n\n> Actually, no, what I am writing here is incorrect. We should make\n> sure of that the default configuration is correct at initdb time, and\n> the patch does not do that.\n\nYeah, that's sort of what I was getting at, but not quite. On newer\nopenssl versions, this doesn't seem like it's really changing anything\nat all --- AFAIK, the client and server will already negotiate the\nhighest jointly-supported TLS version. OTOH, with an openssl version\nold enough to not understand TLS >= 1.2, this change likewise won't do\nanything, except break configurations that used to work (for some\nnot-too-secure value of \"work\").\n\nI think the real question we have to answer is this: are we intent on\nmaking people upgrade ancient openssl installations? If so, shouldn't\nwe be doing something even more aggressive than this? If not, wouldn't\nthe patch need to try to autoconfigure the minimum TLS version? As\nproposed, the patch seems to be somewhere in a passive-aggressive middle\nground of being annoying without really enforcing anything. So I don't\nquite see the point.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 29 Nov 2019 22:06:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "On 2019-11-29 16:30, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> On Fri, Nov 29, 2019 at 01:40:48PM +0100, Magnus Hagander wrote:\n>>> +1. As long as we still have support to change it down if needed, it's a\n>>> good thing to ship with a proper default.\n> \n>> +1.\n> \n> What's the impact going to be on buildfarm members with older openssl\n> installations? Perhaps \"none\", if they aren't running the ssl test\n> suite, but we should be clear about it.\n\nIf they aren't running the ssl tests, then none.\n\nWe could add an override of ssl_min_protocol_version in \nsrc/test/ssl/t/SSLServer.pm so that the tests still work with very old \nOpenSSL versions by default. That might be a good idea.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 2 Dec 2019 11:39:48 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "On 2019-11-30 04:06, Tom Lane wrote:\n> I think the real question we have to answer is this: are we intent on\n> making people upgrade ancient openssl installations? If so, shouldn't\n> we be doing something even more aggressive than this? If not, wouldn't\n> the patch need to try to autoconfigure the minimum TLS version? As\n> proposed, the patch seems to be somewhere in a passive-aggressive middle\n> ground of being annoying without really enforcing anything. So I don't\n> quite see the point.\n\nThe trade-off is that this makes the defaults better for the vast \nmajority of users and gives users of really old systems a nudge that \nthey are no longer in compliance with industry best practices. You need \nmanual steps to set up SSL anyway, so this doesn't introduce an entirely \nnew kind of requirement for the latter group of users.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 2 Dec 2019 11:56:25 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "> On 30 Nov 2019, at 03:43, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Fri, Nov 29, 2019 at 10:30:47AM -0500, Tom Lane wrote:\n>> What's the impact going to be on buildfarm members with older openssl\n>> installations? Perhaps \"none\", if they aren't running the ssl test\n>> suite, but we should be clear about it.\n> \n> The buildfarm logs don't directly report the version of OpenSSL used\n> as far as I recalled, and a quick lookup shows that..\n\nNot explicitly, but it would be a nice if it did. Since the version depends on\nthe optional FIPS module, running \"openssl version\" is really the safe option,\nwhich in itself is hard since the libraries pointed to with --with-libs aren't\nguaranteed to have an openssl command installed etc. OpenSSL might also these\ndays be LibreSSL (or potentially even BoringSSL perhaps if someone twists the\narm of their installation enough).\n\nHowever, looking at the signatures detected by autoconf we can however get an\nidea of which version is used. SSL_clear_options and X509_get_signature_nid()\nfirst shipped in 1.0.2, while SSL_get_current_compression first shipped in\n0.9.8. There are also a set of functions which are new in 1.1.0 (BIO_get_data\net.al).\n\nThis tells us that for example alewife is likely running 1.0.2:\n\nchecking for SSL_new in -lssl... (cached) yes\nchecking for SSL_clear_options... (cached) no\nchecking for SSL_get_current_compression... (cached) yes\nchecking for X509_get_signature_nid... (cached) yes\nchecking for OPENSSL_init_ssl... (cached) no\nchecking for BIO_get_data... (cached) no\nchecking for BIO_meth_new... (cached) no\nchecking for ASN1_STRING_get0_data... (cached) no\n\n(the careful observer notes that the SSL_clear_options() check fails even\nthough it should be in 1.0.2, and thats probably because SSL_clear_options is a\nmacro until 1.1.0 where it becomes a function).\n\ngaur however looks like it is running 0.9.8:\n\nchecking for SSL_new in -lssl... 
yes\nchecking for SSL_clear_options... no\nchecking for SSL_get_current_compression... yes\nchecking for X509_get_signature_nid... no\nchecking for OPENSSL_init_ssl... no\nchecking for BIO_get_data... no\nchecking for BIO_meth_new... no\nchecking for ASN1_STRING_get0_data... no\nchecking for CRYPTO_lock... yes\n\nscorpionfly running OpenBSD 6.6 configures as a LibreSSL on par with what we\nexpect for 1.1.0 (SSL_clear_options again fail here since it's still a macro in\nLibreSSL):\n\nchecking for SSL_new in -lssl... (cached) yes\nchecking for SSL_clear_options... (cached) no\nchecking for SSL_get_current_compression... (cached) yes\nchecking for X509_get_signature_nid... (cached) yes\nchecking for OPENSSL_init_ssl... (cached) yes\nchecking for BIO_get_data... (cached) yes\nchecking for BIO_meth_new... (cached) yes\nchecking for ASN1_STRING_get0_data... (cached) yes\nchecking for CRYPTO_lock... (cached) yes\n\nRandomly picking animals, and trying to target platforms where older versions\ncould be expected, I didn't see any <= 0.9.7; a small number 0.9.8 and most at\n1.0.2 or higher (with the 0.9.8 animals being: gaur, sungazer and prairiedog).\nThis is not an exhaustive list of course, maybe someone with better access to\nthe buildfarm data can do some more clever analysis.\n\ncheers ./daniel\n\n",
"msg_date": "Mon, 2 Dec 2019 14:09:51 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 30 Nov 2019, at 03:43, Michael Paquier <michael@paquier.xyz> wrote:\n>> The buildfarm logs don't directly report the version of OpenSSL used\n>> as far as I recalled, and a quick lookup shows that..\n\n> Not explicitly, but it would be a nice if it did. Since the version depends on\n> the optional FIPS module, running \"openssl version\" is really the safe option,\n> which in itself is hard since the libraries pointed to with --with-libs aren't\n> guaranteed to have an openssl command installed etc. OpenSSL might also these\n> days be LibreSSL (or potentially even BoringSSL perhaps if someone twists the\n> arm of their installation enough).\n\nYeah, I do not think that would be a good solution --- it would give wrong\nanswers on three of my four buildfarm animals :-(, for precisely the\nreason that they're using --with-libs to point to a non-system openssl\ninstallation.\n\nIs there a simple way to ask the library itself for version info?\nIt might be worth the cycles to have configure run a small test\nprogram to extract and print that data (not on cross-compile\nbuilds, of course).\n\n> (the careful observer notes that the SSL_clear_options() check fails even\n> though it should be in 1.0.2, and thats probably because SSL_clear_options is a\n> macro until 1.1.0 where it becomes a function).\n\nHmm, is it worth the trouble to fix that?\n\n> gaur however looks like it is running 0.9.8:\n\ngaur and prairiedog are both building with 0.9.8x, as you can tell\nfrom their --with-libs options.\n\n> Randomly picking animals, and trying to target platforms where older versions\n> could be expected, I didn't see any <= 0.9.7; a small number 0.9.8 and most at\n> 1.0.2 or higher (with the 0.9.8 animals being: gaur, sungazer and prairiedog).\n\nAccording to the commit log (see 593d4e47d), we require 0.9.8 or later\nin v10 and up, so any older animals got upgraded or retired some time\nago.\n\n\t\t\tregards, tom 
lane\n\n\n",
"msg_date": "Mon, 02 Dec 2019 09:59:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-11-30 04:06, Tom Lane wrote:\n>> I think the real question we have to answer is this: are we intent on\n>> making people upgrade ancient openssl installations?\n\n> The trade-off is that this makes the defaults better for the vast \n> majority of users and gives users of really old systems a nudge that \n> they are no longer in compliance with industry best practices. You need \n> manual steps to set up SSL anyway, so this doesn't introduce an entirely \n> new kind of requirement for the latter group of users.\n\nTrue. I'm okay with this as long as we adapt the ssl test suite as\nper your other reply.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 Dec 2019 10:13:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "> On 2 Dec 2019, at 15:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Is there a simple way to ask the library itself for version info?\n> It might be worth the cycles to have configure run a small test\n> program to extract and print that data (not on cross-compile\n> builds, of course).\n\nAsking the lib is easy, making that fit cleanly into how autoconf does things might be trickier. I’ll take a look and will report back (on the SSL_clear_options thing as well).\n\ncheers ./daniel\n\n\n",
"msg_date": "Mon, 2 Dec 2019 16:28:39 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "On Fri, Nov 29, 2019 at 9:44 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Actually, no, what I am writing here is incorrect. We should make\n> sure of that the default configuration is correct at initdb time, and\n> the patch does not do that.\n\nI think that would be overkill. There shouldn't be many people who are\nrunning with a version of PostgreSQL that is 8 years newer than the\nversion of OpenSSL they are using, and who are also relying on SSL,\nand even if there are such people, it's a pretty minor configuration\nchange to make it work. However, it would be worth putting in some\neffort to make sure that we give a good error message if this happens.\nI'm not sure how practical that is. But there's a big difference\nbetween giving an incomprehensible OpenSSL message that says \"things\naren't working and good luck figuring out why\" and giving a message\nthat says something like:\n\nERROR: ssl_min_protocol_version specifies TLSv1.2, but your OpenSSL\nlibrary does not support protocol versions beyond TLSv1.1\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 2 Dec 2019 10:31:40 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> ... However, it would be worth putting in some\n> effort to make sure that we give a good error message if this happens.\n\nThat's an excellent point, but it looks like we're pretty good\nalready. I tried the patch with openssl 0.9.8x, and got this\nfailure at server start:\n\nFATAL: ssl_min_protocol_version setting TLSv1.2 not supported by this build\n\nMaybe it'd be worth extending that to show the max supported\nversion, with some rats-nest of #ifdefs, but I'm not sure if\nit's worth the trouble.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 Dec 2019 11:39:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "On Mon, Dec 2, 2019 at 11:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> That's an excellent point, but it looks like we're pretty good\n> already. I tried the patch with openssl 0.9.8x, and got this\n> failure at server start:\n>\n> FATAL: ssl_min_protocol_version setting TLSv1.2 not supported by this build\n\nOh, that's pretty good.\n\n> Maybe it'd be worth extending that to show the max supported\n> version, with some rats-nest of #ifdefs, but I'm not sure if\n> it's worth the trouble.\n\nEspecially if we mess up the #ifdefs. :-)\n\nI don't have super-strong feelings that we have to try to do that. It\nwould be worth doing if it were easy, I think, but if our hypothesis\nthat this will affect relatively few people is correct, it may not\nmatter very much.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 2 Dec 2019 12:40:18 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Dec 2, 2019 at 11:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Maybe it'd be worth extending that to show the max supported\n>> version, with some rats-nest of #ifdefs, but I'm not sure if\n>> it's worth the trouble.\n\n> Especially if we mess up the #ifdefs. :-)\n\nYah. Although, looking at the code in be-secure-openssl.c,\nit doesn't look that hard to do in an extensible way.\nSomething like (untested)\n\n static int\n ssl_protocol_version_to_openssl(int v, const char *guc_name, int loglevel)\n {\n switch (v)\n {\n case PG_TLS_ANY:\n return 0;\n case PG_TLS1_VERSION:\n+#define PG_MAX_TLS_VERSION \"TLSv1\"\n return TLS1_VERSION;\n case PG_TLS1_1_VERSION:\n #ifdef TLS1_1_VERSION\n+#undef PG_MAX_TLS_VERSION\n+#define PG_MAX_TLS_VERSION \"TLSv1.1\"\n return TLS1_1_VERSION;\n #else\n break;\n #endif\n case PG_TLS1_2_VERSION:\n #ifdef TLS1_2_VERSION\n+#undef PG_MAX_TLS_VERSION\n+#define PG_MAX_TLS_VERSION \"TLSv1.2\"\n return TLS1_2_VERSION;\n #else\n break;\n #endif\n case PG_TLS1_3_VERSION:\n #ifdef TLS1_3_VERSION\n+#undef PG_MAX_TLS_VERSION\n+#define PG_MAX_TLS_VERSION \"TLSv1.3\"\n return TLS1_3_VERSION;\n #else\n break;\n #endif\n }\n \n ereport(loglevel,\n (errmsg(\"%s setting %s not supported by this build\",\n guc_name,\n- GetConfigOption(guc_name, false, false))));\n+ GetConfigOption(guc_name, false, false)),\n+ errdetail(\"Maximum supported TLS version is %s.\",\n+ PG_MAX_TLS_VERSION)));\n return -1;\n }\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 Dec 2019 12:51:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "On Mon, Dec 02, 2019 at 09:59:44AM -0500, Tom Lane wrote:\n> Is there a simple way to ask the library itself for version info?\n> It might be worth the cycles to have configure run a small test\n> program to extract and print that data (not on cross-compile\n> builds, of course).\n\nSSLeay_version():\nhttps://www.openssl.org/docs/man1.0.2/man3/SSLeay_version.html\n--\nMichael",
"msg_date": "Tue, 3 Dec 2019 12:47:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "On Mon, Dec 02, 2019 at 12:51:26PM -0500, Tom Lane wrote:\n> Yah. Although, looking at the code in be-secure-openssl.c,\n> it doesn't look that hard to do in an extensible way.\n> Something like (untested)\n\nWhile we are on the topic... Here is another wild idea. We discussed\nnot so long ago about removing support for OpenSSL 0.9.8 from the\ntree. What if we removed support for 1.0.0 and 0.9.8 for 13~. This\nwould solve a couple of compatibility headaches, and we have TLSv1.2\nsupport automatically for all the versions supported. Note that 1.0.0\nhas been retired by upstream in February 2014.\n--\nMichael",
"msg_date": "Tue, 3 Dec 2019 12:53:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "On Tue, Dec 3, 2019 at 4:53 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Dec 02, 2019 at 12:51:26PM -0500, Tom Lane wrote:\n> > Yah. Although, looking at the code in be-secure-openssl.c,\n> > it doesn't look that hard to do in an extensible way.\n> > Something like (untested)\n>\n> While we are on the topic... Here is another wild idea. We discussed\n> not so long ago about removing support for OpenSSL 0.9.8 from the\n> tree. What if we removed support for 1.0.0 and 0.9.8 for 13~. This\n> would solve a couple of compatibility headaches, and we have TLSv1.2\n> support automatically for all the versions supported. Note that 1.0.0\n> has been retired by upstream in February 2014.\n>\n\nIs 1.0.1 considered a separate major from 1.0.0, in this reasoning? Because\nwhile retiring 1.0.0 should probably not be that terrible, 1.0.1 is still\nin very widespread use on most long term supported distributions.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Tue, Dec 3, 2019 at 4:53 AM Michael Paquier <michael@paquier.xyz> wrote:On Mon, Dec 02, 2019 at 12:51:26PM -0500, Tom Lane wrote:\n> Yah. Although, looking at the code in be-secure-openssl.c,\n> it doesn't look that hard to do in an extensible way.\n> Something like (untested)\n\nWhile we are on the topic... Here is another wild idea. We discussed\nnot so long ago about removing support for OpenSSL 0.9.8 from the\ntree. What if we removed support for 1.0.0 and 0.9.8 for 13~. This\nwould solve a couple of compatibility headaches, and we have TLSv1.2\nsupport automatically for all the versions supported. Note that 1.0.0\nhas been retired by upstream in February 2014.Is 1.0.1 considered a separate major from 1.0.0, in this reasoning? 
Because while retiring 1.0.0 should probably not be that terrible, 1.0.1 is still in very widespread use on most long term supported distributions.-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Tue, 3 Dec 2019 10:10:57 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "On Tue, Dec 03, 2019 at 10:10:57AM +0100, Magnus Hagander wrote:\n> Is 1.0.1 considered a separate major from 1.0.0, in this reasoning? Because\n> while retiring 1.0.0 should probably not be that terrible, 1.0.1 is still\n> in very widespread use on most long term supported distributions.\n\n1.0.1 and 1.0.0 are two different major releases in the OpenSSL world,\nso my suggestion would be to cut support for everything which does not\nhave TLSv1.2, meaning that we keep compatibility with 1.0.1 for\na longer period.\n--\nMichael",
"msg_date": "Tue, 3 Dec 2019 20:08:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "On Tue, Dec 3, 2019 at 12:09 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Dec 03, 2019 at 10:10:57AM +0100, Magnus Hagander wrote:\n> > Is 1.0.1 considered a separate major from 1.0.0, in this reasoning?\n> Because\n> > while retiring 1.0.0 should probably not be that terrible, 1.0.1 is still\n> > in very widespread use on most long term supported distributions.\n>\n> 1.0.1 and 1.0.0 are two different major releases in the OpenSSL world,\n> so my suggestion would be to cut support for everything which does not\n> have TLSv1.2, meaning that we keep compatibility with 1.0.1 for\n> a longer period.\n>\n\nGood, that's what I thought you meant :) And that makes it sound like a\nworking plan to me.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Tue, Dec 3, 2019 at 12:09 PM Michael Paquier <michael@paquier.xyz> wrote:On Tue, Dec 03, 2019 at 10:10:57AM +0100, Magnus Hagander wrote:\n> Is 1.0.1 considered a separate major from 1.0.0, in this reasoning? Because\n> while retiring 1.0.0 should probably not be that terrible, 1.0.1 is still\n> in very widespread use on most long term supported distributions.\n\n1.0.1 and 1.0.0 are two different major releases in the OpenSSL world,\nso my suggestion would be to cut support for everything which does not\nhave TLSv1.2, meaning that we keep compatibility with 1.0.1 for\na longer period.Good, that's what I thought you meant :) And that makes it sound like a working plan to me. -- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Tue, 3 Dec 2019 12:44:32 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "On 2019-12-02 17:39, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> ... However, it would be worth putting in some\n>> effort to make sure that we give a good error message if this happens.\n> \n> That's an excellent point, but it looks like we're pretty good\n> already. I tried the patch with openssl 0.9.8x, and got this\n> failure at server start:\n> \n> FATAL: ssl_min_protocol_version setting TLSv1.2 not supported by this build\n\nThat's the easy part, since it's under our control. The other situation \nis if you connect with an old library to a newer server that has the \nraised ssl_min_protocol_version setting. Then you get something like this:\n\npsql: SSL error: tlsv1 alert protocol version\n\nand on the server:\n\nLOG: could not accept SSL connection: unsupported protocol\n\nNot great, but usable.\n\n(What actually happens due to the default of PGSSLMODE=prefer is that \npsql/libpq will have the SSL connection attempt rejected and will \nconnect using a non-SSL connection.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 4 Dec 2019 08:52:14 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "On 2019-12-03 12:44, Magnus Hagander wrote:\n> On Tue, Dec 3, 2019 at 12:09 PM Michael Paquier <michael@paquier.xyz \n> <mailto:michael@paquier.xyz>> wrote:\n> \n> On Tue, Dec 03, 2019 at 10:10:57AM +0100, Magnus Hagander wrote:\n> > Is 1.0.1 considered a separate major from 1.0.0, in this\n> reasoning? Because\n> > while retiring 1.0.0 should probably not be that terrible, 1.0.1\n> is still\n> > in very widespread use on most long term supported distributions.\n> \n> 1.0.1 and 1.0.0 are two different major releases in the OpenSSL world,\n> so my suggestion would be to cut support for everything which does not\n> have TLSv1.2, meaning that we keep compatibility with 1.0.1 for\n> a longer period.\n> \n> \n> Good, that's what I thought you meant :) And that makes it sound like a \n> working plan to me.\n\nThis would mean we'd stop support for RHEL 5, which is probably OK, \nseeing that even the super-extended support ends in November 2020.\n\nDropping RHEL 5 would also allow us to drop support for Python 2.4, \nwhich is something I've been itching to do. ;-)\n\nIn both of these cases, maintaining support for all these ancient \nversions is a significant burden IMO, so it would be good to clean up \nthe tail end a bit.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 4 Dec 2019 09:10:04 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "On Wed, Dec 04, 2019 at 09:10:04AM +0100, Peter Eisentraut wrote:\n> This would mean we'd stop support for RHEL 5, which is probably OK, seeing\n> that even the super-extended support ends in November 2020.\n\nSounds like a plan. I can work on the OpenSSL part, if you need help\nof course. And if others don't object in doing that. Of course.\n\n> Dropping RHEL 5 would also allow us to drop support for Python 2.4, which is\n> something I've been itching to do. ;-)\n> \n> In both of these cases, maintaining support for all these ancient versions\n> is a significant burden IMO, so it would be good to clean up the tail end a\n> bit.\n\nGood to know.\n--\nMichael",
"msg_date": "Wed, 4 Dec 2019 17:20:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-12-03 12:44, Magnus Hagander wrote:\n>> On Tue, Dec 3, 2019 at 12:09 PM Michael Paquier <michael@paquier.xyz \n>> <mailto:michael@paquier.xyz>> wrote:\n>> On Tue, Dec 03, 2019 at 10:10:57AM +0100, Magnus Hagander wrote:\n>>> Is 1.0.1 considered a separate major from 1.0.0, in this reasoning? Because\n>>> while retiring 1.0.0 should probably not be that terrible, 1.0.1\n>>> is still in very widespread use on most long term supported distributions.\n\n> This would mean we'd stop support for RHEL 5, which is probably OK, \n> seeing that even the super-extended support ends in November 2020.\n\n> Dropping RHEL 5 would also allow us to drop support for Python 2.4, \n> which is something I've been itching to do. ;-)\n\n> In both of these cases, maintaining support for all these ancient \n> versions is a significant burden IMO, so it would be good to clean up \n> the tail end a bit.\n\nSo, what exactly are we going to set as the new minimum version in\neach case? I'll have to go update my trailing-edge-Johnnie buildfarm\ncritters, and it'd make sense to have them continue to test the\noldest nominally-supported versions.\n\nFor OpenSSL it seems like 1.0.1a is the target, per the above\ndiscussion.\n\nFor Python, I'll just observe that RHEL6 ships 2.6.6, so we can't\nbump up to 2.7.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 Dec 2019 07:53:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "On 2019-12-04 13:53, Tom Lane wrote:\n> So, what exactly are we going to set as the new minimum version in\n> each case? I'll have to go update my trailing-edge-Johnnie buildfarm\n> critters, and it'd make sense to have them continue to test the\n> oldest nominally-supported versions.\n> \n> For OpenSSL it seems like 1.0.1a is the target, per the above\n> discussion.\n> \n> For Python, I'll just observe that RHEL6 ships 2.6.6, so we can't\n> bump up to 2.7.\n\nYes, it would be Python 2.6.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 4 Dec 2019 16:24:01 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "On 2019-12-04 09:20, Michael Paquier wrote:\n> On Wed, Dec 04, 2019 at 09:10:04AM +0100, Peter Eisentraut wrote:\n>> This would mean we'd stop support for RHEL 5, which is probably OK, seeing\n>> that even the super-extended support ends in November 2020.\n> \n> Sounds like a plan. I can work on the OpenSSL part, if you need help\n> of course. And if others don't object in doing that. Of course.\n\nPlease go ahead and propose a patch.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 4 Dec 2019 16:24:37 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "On 2019-12-02 16:13, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> On 2019-11-30 04:06, Tom Lane wrote:\n>>> I think the real question we have to answer is this: are we intent on\n>>> making people upgrade ancient openssl installations?\n> \n>> The trade-off is that this makes the defaults better for the vast\n>> majority of users and gives users of really old systems a nudge that\n>> they are no longer in compliance with industry best practices. You need\n>> manual steps to set up SSL anyway, so this doesn't introduce an entirely\n>> new kind of requirement for the latter group of users.\n> \n> True. I'm okay with this as long as we adapt the ssl test suite as\n> per your other reply.\n\nI have committed this with that change. The discussion on which OpenSSL \nversions to support and how will continue.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 4 Dec 2019 23:58:22 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "On Mon, Dec 02, 2019 at 02:09:51PM +0100, Daniel Gustafsson wrote:\n> However, looking at the signatures detected by autoconf we can however get an\n> idea of which version is used. SSL_clear_options and X509_get_signature_nid()\n> first shipped in 1.0.2, while SSL_get_current_compression first shipped in\n> 0.9.8. There are also a set of functions which are new in 1.1.0 (BIO_get_data\n> et.al).\n\nI was just looking at this problem, and something does not match with\nwhat you wrote here. SSL_clear_options() is defined in OpenSSL from\n0.9.8 to 1.0.2 as a macro (see ssl/ssl.h), and is defined as a\nfunction since 1.1.0. So it seems to me that we are able to correctly\ndetect the presence of this function in the configure checks if\nbuilding with 1.1.0~, but not other versions.\n\nIn LibreSSL, the code has visibly always used a macro, even on their\nlatest HEAD since the code has been forked from OpenSSL 1.0.1g:\nhttps://github.com/libressl-portable/openbsd. So we should be able\nto compile our code, still we fail to detect that we can use the\nmacro. \n\nIt seems to me that we have quite a couple of arguments in favor of\ndropping this configure check all together. (I saw the business\naround a364dfa as well regarding NetBSD 5.1).\n\nWe can do more cleanup, and the discussion is quite different than the\noriginal intent of this thread, so I am going to create a new one on\nthe matter.\n--\nMichael",
"msg_date": "Thu, 5 Dec 2019 10:48:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-12-04 13:53, Tom Lane wrote:\n>> So, what exactly are we going to set as the new minimum version in\n>> each case? I'll have to go update my trailing-edge-Johnnie buildfarm\n>> critters, and it'd make sense to have them continue to test the\n>> oldest nominally-supported versions.\n>> \n>> For OpenSSL it seems like 1.0.1a is the target, per the above\n>> discussion.\n>> \n>> For Python, I'll just observe that RHEL6 ships 2.6.6, so we can't\n>> bump up to 2.7.\n\n> Yes, it would be Python 2.6.\n\nSo the upshot, after a fair amount of hair-pulling, is\n\n* Somebody maybe should be testing openssl 1.0.1, but it won't be\nme, because neither 1.0.1 nor 1.0.1a will even build on non-Intel\nplatforms. After closer study of their release notes, I've settled\non 1.0.1e as being the best compromise between being old and not\nhaving unreasonable teething pains. (I wonder how coincidental\nit is that that's also what Red Hat is now shipping in RHEL6.)\nI've successfully installed 1.0.1e on prairiedog and gaur, so\nI can flip them to start building HEAD with that whenever we\nbreak compatibility with 0.9.8.\n\n* Python 2.6.x also suffered from an unreasonable amount of\nteething pains --- 2.6.2 is the oldest version that seems\nto know how to build a shared library on Darwin. I've now\ngot a reasonably functional 2.6 on gaur and 2.6.2 on prairiedog,\nand again will adjust those buildfarm members to use those\ninstallations when/if our support for their current versions\ngoes away.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 Dec 2019 20:50:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "> On 5 Dec 2019, at 02:48, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Mon, Dec 02, 2019 at 02:09:51PM +0100, Daniel Gustafsson wrote:\n>> However, looking at the signatures detected by autoconf we can however get an\n>> idea of which version is used. SSL_clear_options and X509_get_signature_nid()\n>> first shipped in 1.0.2, while SSL_get_current_compression first shipped in\n>> 0.9.8. There are also a set of functions which are new in 1.1.0 (BIO_get_data\n>> et.al).\n> \n> I was just looking at this problem, and something does not match with\n> what you wrote here. SSL_clear_options() is defined in OpenSSL from\n> 0.9.8 to 1.0.2 as a macro (see ssl/ssl.h), and is defined as a\n> function since 1.1.0.\n\nYes, I confused myself regarding the version for SSL_clear_options, except for\nwhen it turned into a function.\n\n> So it seems to me that we are able to correctly\n> detect the presence of this function in the configure checks if\n> building with 1.1.0~, but not other versions.\n> \n> In LibreSSL, the code has visibly always used a macro, even on their\n> latest HEAD since the code has been forked from OpenSSL 1.0.1g:\n> https://github.com/libressl-portable/openbsd. So we should be able\n> to compile our code, still we fail to detect that we can use the\n> macro. \n\nYes, we can't use AC_CHECK_FUNCS but would need to use AC_COMPILE_IFELSE (or a\nsimilar check) in order to detect the macro.\n\n> It seems to me that we have quite a couple of arguments in favor of\n> dropping this configure check all together. (I saw the business\n> around a364dfa as well regarding NetBSD 5.1).\n> \n> We can do more cleanup, and the discussion is quite different than the\n> original intent of this thread, so I am going to create a new one on\n> the matter.\n\nYes, if we're dropping older versions such that all supported versions have the\nfunction, then keeping the autoconf check would be quite pointless.\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 5 Dec 2019 10:03:27 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 5 Dec 2019, at 02:48, Michael Paquier <michael@paquier.xyz> wrote:\n>> So it seems to me that we are able to correctly\n>> detect the presence of this function in the configure checks if\n>> building with 1.1.0~, but not other versions.\n\n> Yes, we can't use AC_CHECK_FUNCS but would need to use AC_COMPILE_IFELSE (or a\n> similar check) in order to detect the macro.\n\nconfigure already has a similar issue for isinf(). (I thought there\nwere more cases, actually, but I don't see another right now.)\nWe could just duplicate that logic, or maybe it's time to wrap it\nup in an autoconf macro?\n\n> Yes, if we're dropping older versions such that all supported versions have the\n> function, then keeping the autoconf check would be quite pointless.\n\nTrue as far as HEAD goes. What I'd like to know is whether not\nrealizing that SSL_clear_options is present causes any functional\nissues that would justify back-patching a fix.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 05 Dec 2019 09:50:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "> On 5 Dec 2019, at 15:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>>> On 5 Dec 2019, at 02:48, Michael Paquier <michael@paquier.xyz> wrote:\n>>> So it seems to me that we are able to correctly\n>>> detect the presence of this function in the configure checks if\n>>> building with 1.1.0~, but not other versions.\n> \n>> Yes, we can't use AC_CHECK_FUNCS but would need to use AC_COMPILE_IFELSE (or a\n>> similar check) in order to detect the macro.\n> \n> configure already has a similar issue for isinf(). (I thought there\n> were more cases, actually, but I don't see another right now.)\n> We could just duplicate that logic, or maybe it's time to wrap it\n> up in an autoconf macro?\n> \n>> Yes, if we're dropping older versions such that all supported versions have the\n>> function, then keeping the autoconf check would be quite pointless.\n> \n> True as far as HEAD goes. \n\nGood point.\n\n> What I'd like to know is whether not\n> realizing that SSL_clear_options is present causes any functional\n> issues that would justify back-patching a fix.\n\nISTM that SSL_clear_options is required for turning on compression. Since\ncompression was introduced in 1.0.0 and SSL_clear_options was turned into a\nfunction in 1.1.0, it affects 1.0.0, 1.0.1 and 1.0.2 with the latter two being\nquite heavily used. I'm not sure how common it is to enable compression, and\nespecially how common it is post-CRIME, but since the option is there it seems\nsilly for it not to work with highly common library versions. Removing the\ncheck only affects NetBSD 5, but breaking compilation in a stable release, even\nfor a rare OS, is I assume/hope a no-no. So thats a +1 from me for back-\npatching a fix, while removing the check altogether in master.\n\nThe attached converts the check to use AC_LINK_IFELSE, in order to detect the\nmacro as well as the function (the compiled code is omitted for readability).\nThe patch is against master, but the check applies against backbranches except\nfor the AC_CHECK_FUNCS hunk which need tailoring per backbranch. I didn't\nconvert it to an autoconf macro, as there are only two callers in the\nbackbranches and it won't go into HEAD.\n\ncheers ./daniel",
"msg_date": "Thu, 5 Dec 2019 23:29:36 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 5 Dec 2019, at 15:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> What I'd like to know is whether not\n>> realizing that SSL_clear_options is present causes any functional\n>> issues that would justify back-patching a fix.\n\n> ISTM that SSL_clear_options is required for turning on compression. Since\n> compression was introduced in 1.0.0 and SSL_clear_options was turned into a\n> function in 1.1.0, it affects 1.0.0, 1.0.1 and 1.0.2 with the latter two being\n> quite heavily used. I'm not sure how common it is to enable compression, and\n> especially how common it is post-CRIME, but since the option is there it seems\n> silly for it not to work with highly common library versions. Removing the\n> check only affects NetBSD 5, but breaking compilation in a stable release, even\n> for a rare OS, is I assume/hope a no-no. So thats a +1 from me for back-\n> patching a fix, while removing the check altogether in master.\n\nAgreed that we should do something about this. However, our requirement\nfor 0.9.8 or newer has been there since v10 (cf. 593d4e47d). So I think\nwhat we should do is\n\n(1) Back-patch Michael's\n0002-Remove-configure-checks-for-SSL_clear_options-in-Ope.patch\nfrom the other thread [1] as far as v10.\n\n(2) Use this patch in 9.4-9.6.\n\nIt'd be possible to also backpatch the other thread's\n0001-Remove-configure-checks-for-SSL_get_current_compress.patch\nas far as v10, but I'm less excited about that -- it'd just save\na few configure cycles, no?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/20191205083252.GE5064%40paquier.xyz\n\n\n",
"msg_date": "Thu, 05 Dec 2019 19:41:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "On Thu, Dec 05, 2019 at 07:41:14PM -0500, Tom Lane wrote:\n> It'd be possible to also backpatch the other thread's\n> 0001-Remove-configure-checks-for-SSL_get_current_compress.patch\n> as far as v10, but I'm less excited about that -- it'd just save\n> a few configure cycles, no?\n\nYeah. I'd try not to meddle with stable branches more than necessary,\nand the removal of the part for get_current_compression is just a\ncleanup so I would just do that on HEAD and be done with it.\n\nAbout clear_options, my take is to remove the check on HEAD, and to\napply Daniel's patch on *all* stable branches because I think that we\nshould not break the business that happened with NetBSD 5 on already\nreleased branches. Does that sound good?\n--\nMichael",
"msg_date": "Fri, 6 Dec 2019 09:59:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> About clear_options, my take is to remove the check on HEAD, and to\n> apply Daniel's patch on *all* stable branches because I think that we\n> should not break the business that happened with NetBSD 5 on already\n> released branches. Does that sound good?\n\nOK, re-reading the thread, I see the point --- old NetBSD has a weird\nOpenSSL version that this would break. OK, let's stay compatible\nwith that on the back branches. So, your patch on HEAD and Daniel's\nin the back branches is the right thing. Please push.\n\n(Note: I didn't actually test Daniel's patch or read it closely ---\nit looks like about the right thing, but please double check.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 05 Dec 2019 23:40:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
},
{
"msg_contents": "On Thu, Dec 05, 2019 at 11:40:37PM -0500, Tom Lane wrote:\n> OK, re-reading the thread, I see the point --- old NetBSD has a weird\n> OpenSSL version that this would break. OK, let's stay compatible\n> with that on the back branches. So, your patch on HEAD and Daniel's\n> in the back branches is the right thing. Please push.\n\nThanks, applied. I have tested Daniel's version with OpenSSL 0.9.8,\n1.0.2 and 1.1.0 and the test was able to detect correctly the\nmacro/function in all cases.\n--\nMichael",
"msg_date": "Fri, 6 Dec 2019 15:20:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Update minimum SSL version"
}
] |
[
{
"msg_contents": "Hi,\n\nI encountered that the assertion error is reported instead of a proper\nPANIC message when failed to fsync WAL. The cause is that there are\nmultiple places where we call XLogFileNameP function that calls palloc\nduring critical section, for example XLogWrite function.\n\nTRAP: FailedAssertion(\"CritSectionCount == 0 ||\n(context)->allowInCritSection\", File: \"mcxt.c\", Line: 956)\n\nAs far as I can see there are five places we need to fix.I've attached a patch.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 29 Nov 2019 18:44:58 +0100",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Using XLogFileNameP in critical section"
},
{
"msg_contents": "On Fri, Nov 29, 2019 at 06:44:58PM +0100, Masahiko Sawada wrote:\n> I encountered that the assertion error is reported instead of a proper\n> PANIC message when failed to fsync WAL. The cause is that there are\n> multiple places where we call XLogFileNameP function that calls palloc\n> during critical section, for example XLogWrite function.\n> \n> TRAP: FailedAssertion(\"CritSectionCount == 0 ||\n> (context)->allowInCritSection\", File: \"mcxt.c\", Line: 956)\n> \n> As far as I can see there are five places we need to fix. I've\n> attached a patch.\n\n+ msg = \"could not fdatasync file \\\"%s\\\": %m\";\nMissing some translations, no?\n\nYou are missing a couple of code paths in walreceiver.c,\nXLogWalRcvWrite(), where XLogFileNameP is used on a PANIC. This\nbrings me the following points:\n1) If you look closely, all the callers of XLogFileNameP() are used\nfor the generation of error strings.\n2) I am ready to bet that we'll have the same discussion in the future\nbecause somebody will make the same mistake for a new code path.\n\nI think that we had better just get rid of XLogFileNameP() (on HEAD)\nand just change those code paths so as they use a buffer of size\nMAXFNAMELEN, with XLogFileName() generating the file name. This leads\nactually to some simplifications, see for example XLogWalRcvWrite..\n--\nMichael",
"msg_date": "Mon, 2 Dec 2019 15:50:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Using XLogFileNameP in critical section"
},
{
"msg_contents": "On 2019-Dec-02, Michael Paquier wrote:\n\n> I think that we had better just get rid of XLogFileNameP() (on HEAD)\n> and just change those code paths so as they use a buffer of size\n> MAXFNAMELEN, with XLogFileName() generating the file name. This leads\n> actually to some simplifications, see for example XLogWalRcvWrite..\n\n+1, that function is a footgun and there's no possible way to fix it so\nthat it's never misused(*). Also, put a comment atop XLogFileName\nindicating never to create such a \"helper\" function again.\n\n(*) unless there was a way to have a StaticAssert() that the function is\nnot called in a critical section -- have the compiler work for us. I\ndon't think it's possible to write such a thing.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 2 Dec 2019 12:56:08 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Using XLogFileNameP in critical section"
},
{
"msg_contents": "On Mon, 2 Dec 2019 at 16:56, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Dec-02, Michael Paquier wrote:\n>\n> > I think that we had better just get rid of XLogFileNameP() (on HEAD)\n> > and just change those code paths so as they use a buffer of size\n> > MAXFNAMELEN, with XLogFileName() generating the file name. This leads\n> > actually to some simplifications, see for example XLogWalRcvWrite..\n>\n> +1, that function is a footgun and there's no possible way to fix it so\n> that it's never misused(*). Also, put a comment atop XLogFileName\n> indicating never to create such a \"helper\" function again.\n>\n> (*) unless there was a way to have a StaticAssert() that the function is\n> not called in a critical section -- have the compiler work for us. I\n> don't think it's possible to write such a thing.\n\nAgreed. I've attached the updated version patch. Please review it.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 2 Dec 2019 22:14:58 +0100",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Using XLogFileNameP in critical section"
},
{
"msg_contents": "On Mon, Dec 02, 2019 at 10:14:58PM +0100, Masahiko Sawada wrote:\n> Agreed. I've attached the updated version patch. Please review it.\n\nThanks, applied on HEAD after a few edits. gettext() does not set\nerrno, so the new style of issue_xlog_fsync() is actually fine.\nPlease note that there was one mistake in the patch: you forgot to\nassign back errno in assign_xlog_sync_method() after generating the\nfile name.\n\nPer the low probability of the failures, I did not backpatch that\nstuff. I quickly looked at applying that further down, and attached\nis a version for v12 FWIW, and I suspect much more conflicts the more\nyou go down (wal segment size added in 11, different code paths for\nreplication, etc.).\n--\nMichael",
"msg_date": "Tue, 3 Dec 2019 15:08:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Using XLogFileNameP in critical section"
},
{
"msg_contents": "On 2019-Dec-03, Michael Paquier wrote:\n\n> Per the low probability of the failures, I did not backpatch that\n> stuff. I quickly looked at applying that further down, and attached\n> is a version for v12 FWIW, and I suspect much more conflicts the more\n> you go down (wal segment size added in 11, different code paths for\n> replication, etc.).\n\nYou didn't attach anything, but I concur about the low probability\naspect: the assertion failure does not occur in production builds\n(obviously); and only an out-of-memory situation is a real problem when\nan fsync fails. Anyway this should be a very localized fix, right?\n\nI'm not sure that the internationalization stuff in issue_xlog_fsync is\ncorrect. I think the _() should be gettext_noop(), or alternatively the\nerrmsg() should be errmsg_internal(); otherwise the translation is\ninvoked twice. (I didn't verify this.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 3 Dec 2019 09:35:00 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Using XLogFileNameP in critical section"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I'm not sure that the internationalization stuff in issue_xlog_fsync is\n> correct. I think the _() should be gettext_noop(), or alternatively the\n> errmsg() should be errmsg_internal(); otherwise the translation is\n> invoked twice. (I didn't verify this.)\n\nAlso, buildfarm member drongo is not happy:\n\npostgres.def : error LNK2001: unresolved external symbol XLogFileNameP [C:\\prog\\bf\\root\\HEAD\\pgsql.build\\postgres.vcxproj]\nRelease/postgres/postgres.lib : fatal error LNK1120: 1 unresolved externals [C:\\prog\\bf\\root\\HEAD\\pgsql.build\\postgres.vcxproj]\n\nI'm guessing you missed a reference someplace.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 03 Dec 2019 10:33:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Using XLogFileNameP in critical section"
},
{
"msg_contents": "I wrote:\n> Also, buildfarm member drongo is not happy:\n> postgres.def : error LNK2001: unresolved external symbol XLogFileNameP [C:\\prog\\bf\\root\\HEAD\\pgsql.build\\postgres.vcxproj]\n> Release/postgres/postgres.lib : fatal error LNK1120: 1 unresolved externals [C:\\prog\\bf\\root\\HEAD\\pgsql.build\\postgres.vcxproj]\n> I'm guessing you missed a reference someplace.\n\nHm ... grep swears up and down that there is no remaining instance\nof the string \"XLogFileNameP\" anywhere in the tree. So this doesn't\nseem to be the fault of 9989d37d1 per se. What my eye now falls on\nis this, a bit further up in the build log [1]:\n\n...\nPreLinkEvent:\n Generate DEF file\n perl src\\tools\\msvc\\gendef.pl Release\\postgres x64\n :VCEnd\n Not re-generating POSTGRES.DEF, file already exists.\nLink:\n...\n\nSo it seems that the problem might really be a faulty rule in our\nMSVC build script about when postgres.def needs to be regenerated?\nOr else it's some weird caching problem on drongo --- the lack of\ncomplaints from other Windows critters might point the finger\nthat way.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2019-12-03%2007%3A30%3A01\n\n\n",
"msg_date": "Tue, 03 Dec 2019 11:24:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Using XLogFileNameP in critical section"
},
{
"msg_contents": "On Tue, 3 Dec 2019 at 07:09, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Dec 02, 2019 at 10:14:58PM +0100, Masahiko Sawada wrote:\n> > Agreed. I've attached the updated version patch. Please review it.\n>\n> Thanks, applied on HEAD after a few edits. gettext() does not set\n> errno, so the new style of issue_xlog_fsync() is actually fine.\n\nThanks!\n\n> Please note that there was one mistake in the patch: you forgot to\n> assign back errno in assign_xlog_sync_method() after generating the\n> file name.\n\nMy bad. Thank you for fixing it.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 3 Dec 2019 21:32:41 +0100",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Using XLogFileNameP in critical section"
},
{
"msg_contents": "On Tue, Dec 03, 2019 at 11:24:57AM -0500, Tom Lane wrote:\n> So it seems that the problem might really be a faulty rule in our\n> MSVC build script about when postgres.def needs to be regenerated?\n> Or else it's some weird caching problem on drongo --- the lack of\n> complaints from other Windows critters might point the finger\n> that way.\n\nYes, I saw the failure from the buildfarm logs, but I got to the\nconclusion that the animal just got crazy with a portion of its\ncaching because there are no more references to routine removed. So I\ndid not bother much.\n\nFWIW, I have seen sometimes similar warnings regarding conflicting\nprojects when doing a compilation on MSVC, applying a patch and then\nrecompiling (because that's just faster than recompiling the whole).\nPerhaps we could do better here. I am not completely sure what\nthough, it's not a problem I have spent much brain-time on.\n--\nMichael",
"msg_date": "Wed, 4 Dec 2019 13:55:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Using XLogFileNameP in critical section"
},
{
"msg_contents": "On Tue, Dec 03, 2019 at 09:35:00AM -0300, Alvaro Herrera wrote:\n> You didn't attach anything, but I concur about the low probability\n> aspect: the assertion failure does not occur in production builds\n> (obviously); and only an out-of-memory situation is a real problem\n> when\n> an fsync fails. Anyway this should be a very localized fix, right?\n\nSorry. You get something like the attached. The recent refactoring\nwork you committed in this area causes already conflicts on\nREL_12_STABLE.\n\n> I'm not sure that the internationalization stuff in issue_xlog_fsync\n> is correct. I think the _() should be gettext_noop(), or alternatively\n> the errmsg() should be errmsg_internal(); otherwise the translation is\n> invoked twice. (I didn't verify this.)\n\nHmm. We actually do both in tablecmds.c:ATWrongRelkindError(), and\nthat's the code I was looking at yesterday when thinking about the\nproblem.. However, parse_agg.c, parse_expr.c and parse_func.c among\nothers like vacuumlazy.c use directly errmsg_internal() without\ntranslating the string first. So there is indeed duplicated work for\nboth. Does the attached patch look correct to you?\n--\nMichael",
"msg_date": "Wed, 4 Dec 2019 14:48:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Using XLogFileNameP in critical section"
},
{
"msg_contents": "\nOn 12/3/19 11:24 AM, Tom Lane wrote:\n> I wrote:\n>> Also, buildfarm member drongo is not happy:\n>> postgres.def : error LNK2001: unresolved external symbol XLogFileNameP [C:\\prog\\bf\\root\\HEAD\\pgsql.build\\postgres.vcxproj]\n>> Release/postgres/postgres.lib : fatal error LNK1120: 1 unresolved externals [C:\\prog\\bf\\root\\HEAD\\pgsql.build\\postgres.vcxproj]\n>> I'm guessing you missed a reference someplace.\n> Hm ... grep swears up and down that there is no remaining instance\n> of the string \"XLogFileNameP\" anywhere in the tree. So this doesn't\n> seem to be the fault of 9989d37d1 per se. What my eye now falls on\n> is this, a bit further up in the build log [1]:\n>\n> ...\n> PreLinkEvent:\n> Generate DEF file\n> perl src\\tools\\msvc\\gendef.pl Release\\postgres x64\n> :VCEnd\n> Not re-generating POSTGRES.DEF, file already exists.\n> Link:\n> ...\n>\n> So it seems that the problem might really be a faulty rule in our\n> MSVC build script about when postgres.def needs to be regenerated?\n> Or else it's some weird caching problem on drongo --- the lack of\n> complaints from other Windows critters might point the finger\n> that way.\n>\n> \t\t\tregards, tom lane\n>\n> [1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2019-12-03%2007%3A30%3A01\n>\n\n\nthis was pilot error on my part. Should be fixed now.\n\n\ncheers\n\n\nandrew\n\n\n\n",
"msg_date": "Fri, 6 Dec 2019 14:30:51 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Using XLogFileNameP in critical section"
}
] |
[
{
"msg_contents": "Folks,\n\nPer a suggestion Christophe made, please find attached a patch to\n$Subject:\n\nApart from carefully fudging with pg_resetwal, and short of running in\nproduction for a few weeks, what would be some good ways to test this?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Fri, 29 Nov 2019 19:01:39 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Make autovacuum sort tables in descending order of xid_age"
},
{
"msg_contents": "On Fri, Nov 29, 2019 at 07:01:39PM +0100, David Fetter wrote:\n> Folks,\n> \n> Per a suggestion Christophe made, please find attached a patch to\n> $Subject:\n> \n> Apart from carefully fudging with pg_resetwal, and short of running in\n> production for a few weeks, what would be some good ways to test this?\n\nPer discussion on IRC with Sehrope Sarkuni, please find attached a\npatch with one fewer bug, this one in the repalloc() calls.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Fri, 29 Nov 2019 23:21:53 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Make autovacuum sort tables in descending order of xid_age"
},
{
"msg_contents": "\n\nOn 11/29/19 2:21 PM, David Fetter wrote:\n> On Fri, Nov 29, 2019 at 07:01:39PM +0100, David Fetter wrote:\n>> Folks,\n>>\n>> Per a suggestion Christophe made, please find attached a patch to\n>> $Subject:\n>>\n>> Apart from carefully fudging with pg_resetwal, and short of running in\n>> production for a few weeks, what would be some good ways to test this?\n> \n> Per discussion on IRC with Sehrope Sarkuni, please find attached a\n> patch with one fewer bug, this one in the repalloc() calls.\n\nHello David,\n\nHere are my initial thoughts.\n\nAlthough you appear to be tackling the problem of vacuuming tables\nwith older Xids first *per database*, have you considered changing\nthe logic in building and sorting the database list in get_database_list\nand rebuild_database_list? I'm just curious what your thoughts\nmight be on this subject.\n\nAs far as sorting the list of tables in an array and then copying\nthat array into a linked list, I think there is no need. The\ncopying of table_ages into table_oids is followed immediately by\n\n foreach(cell, table_oids)\n\nand then table_oids seems not to serve any further purpose. Perhaps\nyou can just iterate over table_ages directly and avoid the extra\ncopying.\n\nI have not tested this change, but I may do so later today or perhaps\non Monday.\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Sat, 30 Nov 2019 10:04:07 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make autovacuum sort tables in descending order of xid_age"
},
{
"msg_contents": "On Sat, Nov 30, 2019 at 10:04:07AM -0800, Mark Dilger wrote:\n> On 11/29/19 2:21 PM, David Fetter wrote:\n> > On Fri, Nov 29, 2019 at 07:01:39PM +0100, David Fetter wrote:\n> > > Folks,\n> > > \n> > > Per a suggestion Christophe made, please find attached a patch to\n> > > $Subject:\n> > > \n> > > Apart from carefully fudging with pg_resetwal, and short of running in\n> > > production for a few weeks, what would be some good ways to test this?\n> > \n> > Per discussion on IRC with Sehrope Sarkuni, please find attached a\n> > patch with one fewer bug, this one in the repalloc() calls.\n> \n> Hello David,\n> \n> Here are my initial thoughts.\n> \n> Although you appear to be tackling the problem of vacuuming tables\n> with older Xids first *per database*,\n\nYes, that's what's come up for me in production, but lately,\nproduction has consisted of a single active DB maxing out hardware. I\ncan see how in other situations--multi-tenant, especially--it would\nmake more sense to sort the DBs first.\n\n> have you considered changing the logic in building and sorting the\n> database list in get_database_list and rebuild_database_list? I'm\n> just curious what your thoughts might be on this subject.\n\nI hadn't, but now that you mention it, it seems like a reasonable\nthing to try.\n\n> As far as sorting the list of tables in an array and then copying\n> that array into a linked list, I think there is no need. The\n> copying of table_ages into table_oids is followed immediately by\n> \n> foreach(cell, table_oids)\n> \n> and then table_oids seems not to serve any further purpose. Perhaps\n> you can just iterate over table_ages directly and avoid the extra\n> copying.\n\nI hadn't looked toward any optimizations in this section, given that\nthe vacuums in question can take hours or days, but I can see how that\nwould make the code cleaner, so please find that change attached.\n\n> I have not tested this change, but I may do so later today or perhaps\n> on Monday.\n\nThanks for looking at this!\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Sat, 30 Nov 2019 23:23:15 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Make autovacuum sort tables in descending order of xid_age"
},
{
"msg_contents": "\n\nOn 11/30/19 2:23 PM, David Fetter wrote:\n> On Sat, Nov 30, 2019 at 10:04:07AM -0800, Mark Dilger wrote:\n>> On 11/29/19 2:21 PM, David Fetter wrote:\n>>> On Fri, Nov 29, 2019 at 07:01:39PM +0100, David Fetter wrote:\n>>>> Folks,\n>>>>\n>>>> Per a suggestion Christophe made, please find attached a patch to\n>>>> $Subject:\n>>>>\n>>>> Apart from carefully fudging with pg_resetwal, and short of running in\n>>>> production for a few weeks, what would be some good ways to test this?\n>>>\n>>> Per discussion on IRC with Sehrope Sarkuni, please find attached a\n>>> patch with one fewer bug, this one in the repalloc() calls.\n>>\n>> Hello David,\n>>\n>> Here are my initial thoughts.\n>>\n>> Although you appear to be tackling the problem of vacuuming tables\n>> with older Xids first *per database*,\n> \n> Yes, that's what's come up for me in production, but lately,\n> production has consisted of a single active DB maxing out hardware. I\n> can see how in other situations--multi-tenant, especially--it would\n> make more sense to sort the DBs first.\n\nI notice you don't address that in your latest patch. Do you have\nany thoughts on whether that needs to be handled in this patch?\nShould tackling that problem be left for later?\n\n>> have you considered changing the logic in building and sorting the\n>> database list in get_database_list and rebuild_database_list? I'm\n>> just curious what your thoughts might be on this subject.\n> \n> I hadn't, but now that you mention it, it seems like a reasonable\n> thing to try.\n> \n>> As far as sorting the list of tables in an array and then copying\n>> that array into a linked list, I think there is no need. The\n>> copying of table_ages into table_oids is followed immediately by\n>>\n>> foreach(cell, table_oids)\n>>\n>> and then table_oids seems not to serve any further purpose. Perhaps\n>> you can just iterate over table_ages directly and avoid the extra\n>> copying.\n> \n> I hadn't looked toward any optimizations in this section, given that\n> the vacuums in question can take hours or days, but I can see how that\n> would make the code cleaner, so please find that change attached.\n\nThat looks better, thanks!\n\n>> I have not tested this change, but I may do so later today or perhaps\n>> on Monday.\n\nThe code compiles cleanly and passes all regression tests, but I don't\nthink those tests really cover what you are changing. Have you been\nusing any test framework for this?\n\nI wonder if you might add information about table size, table changes,\nand bloat to your RelFrozenXidAge struct and modify rfxa_comparator to\nuse a heuristic to cost the (age, size, bloat, changed) grouping and\nsort on that cost, such that really large bloated tables with old xids\nmight get vacuumed before smaller, less bloated tables that have\neven older xids. Sorting the tables based purely on xid_age seems to\nignore other factors that are worth considering. I do not have a\nformula for how those four factors should be weighted in the heuristic,\nbut you are implicitly assigning three of them a weight of zero in\nyour current patch.\n\nrelation_needs_vacanalyze currently checks the reltuples, n_dead_tuples\nand changes_since_analyze along with vac_scale_factor and\nanl_scale_factor for the relation, but only returns booleans dovacuum,\ndoanalyze, and wraparound. If you pass your RelFrozenXidAge struct\n(perhaps renamed) into relation_needs_vacanalyze, it could store those\nvalues for the relation so that you don't need to look it up again when\nsorting.\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Thu, 12 Dec 2019 08:02:25 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make autovacuum sort tables in descending order of xid_age"
},
{
"msg_contents": "On Thu, Dec 12, 2019 at 08:02:25AM -0800, Mark Dilger wrote:\n> On 11/30/19 2:23 PM, David Fetter wrote:\n> > On Sat, Nov 30, 2019 at 10:04:07AM -0800, Mark Dilger wrote:\n> > > On 11/29/19 2:21 PM, David Fetter wrote:\n> > > > On Fri, Nov 29, 2019 at 07:01:39PM +0100, David Fetter wrote:\n> > > > > Folks,\n> > > > > \n> > > > > Per a suggestion Christophe made, please find attached a patch to\n> > > > > $Subject:\n> > > > > \n> > > > > Apart from carefully fudging with pg_resetwal, and short of running in\n> > > > > production for a few weeks, what would be some good ways to test this?\n> > > > \n> > > > Per discussion on IRC with Sehrope Sarkuni, please find attached a\n> > > > patch with one fewer bug, this one in the repalloc() calls.\n> > > \n> > > Hello David,\n> > > \n> > > Here are my initial thoughts.\n> > > \n> > > Although you appear to be tackling the problem of vacuuming tables\n> > > with older Xids first *per database*,\n> > \n> > Yes, that's what's come up for me in production, but lately,\n> > production has consisted of a single active DB maxing out hardware. I\n> > can see how in other situations--multi-tenant, especially--it would\n> > make more sense to sort the DBs first.\n> \n> I notice you don't address that in your latest patch. Do you have\n> any thoughts on whether that needs to be handled in this patch?\n\nMy thought is that it doesn't.\n\n> > > I have not tested this change, but I may do so later today or perhaps\n> > > on Monday.\n> \n> The code compiles cleanly and passes all regression tests, but I don't\n> think those tests really cover what you are changing. Have you been\n> using any test framework for this?\n\nI don't have one :/\n\n> I wonder if you might add information about table size, table changes,\n> and bloat to your RelFrozenXidAge struct and modify rfxa_comparator to\n> use a heuristic to cost the (age, size, bloat, changed) grouping and\n> sort on that cost, such that really large bloated tables with old xids\n> might get vacuumed before smaller, less bloated tables that have\n> even older xids. Sorting the tables based purely on xid_age seems to\n> ignore other factors that are worth considering. I do not have a\n> formula for how those four factors should be weighted in the heuristic,\n> but you are implicitly assigning three of them a weight of zero in\n> your current patch.\n\nI think it's vastly premature to come up with complex sorting systems\nright now. Just sorting in descending order of age should either have\nor not have positive effects.\n\n> relation_needs_vacanalyze currently checks the reltuples, n_dead_tuples\n> and changes_since_analyze along with vac_scale_factor and\n> anl_scale_factor for the relation, but only returns booleans dovacuum,\n> doanalyze, and wraparound.\n\nYeah, I looked at that. It's for a vastly different purpose, namely\ndeciding what's an emergency and what's probably not, but needs\nattention anyhow. My goal was something a little finer-grained and, I\nhope, a little easier to establish the (lack of) benefits because only\none thing is getting changed.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Thu, 12 Dec 2019 20:26:32 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Make autovacuum sort tables in descending order of xid_age"
},
{
"msg_contents": "\n\nOn 12/12/19 11:26 AM, David Fetter wrote:\n> On Thu, Dec 12, 2019 at 08:02:25AM -0800, Mark Dilger wrote:\n>> On 11/30/19 2:23 PM, David Fetter wrote:\n>>> On Sat, Nov 30, 2019 at 10:04:07AM -0800, Mark Dilger wrote:\n>>>> On 11/29/19 2:21 PM, David Fetter wrote:\n>>>>> On Fri, Nov 29, 2019 at 07:01:39PM +0100, David Fetter wrote:\n>>>>>> Folks,\n>>>>>>\n>>>>>> Per a suggestion Christophe made, please find attached a patch to\n>>>>>> $Subject:\n>>>>>>\n>>>>>> Apart from carefully fudging with pg_resetwal, and short of running in\n>>>>>> production for a few weeks, what would be some good ways to test this?\n>>>>>\n>>>>> Per discussion on IRC with Sehrope Sarkuni, please find attached a\n>>>>> patch with one fewer bug, this one in the repalloc() calls.\n>>>>\n>>>> Hello David,\n>>>>\n>>>> Here are my initial thoughts.\n>>>>\n>>>> Although you appear to be tackling the problem of vacuuming tables\n>>>> with older Xids first *per database*,\n>>>\n>>> Yes, that's what's come up for me in production, but lately,\n>>> production has consisted of a single active DB maxing out hardware. I\n>>> can see how in other situations--multi-tenant, especially--it would\n>>> make more sense to sort the DBs first.\n>>\n>> I notice you don't address that in your latest patch. Do you have\n>> any thoughts on whether that needs to be handled in this patch?\n> \n> My thought is that it doesn't.\n\nI can live with that for now. I'd like the design to be compatible with\nrevisiting that in a subsequent patch.\n\n>>>> I have not tested this change, but I may do so later today or perhaps\n>>>> on Monday.\n>>\n>> The code compiles cleanly and passes all regression tests, but I don't\n>> think those tests really cover what you are changing. 
Have you been\n>> using any test framework for this?\n> \n> I don't have one :/\n\nWe need to get that fixed.\n\n>> I wonder if you might add information about table size, table changes,\n>> and bloat to your RelFrozenXidAge struct and modify rfxa_comparator to\n>> use a heuristic to cost the (age, size, bloat, changed) grouping and\n>> sort on that cost, such that really large bloated tables with old xids\n>> might get vacuumed before smaller, less bloated tables that have\n>> even older xids. Sorting the tables based purely on xid_age seems to\n>> ignore other factors that are worth considering. I do not have a\n>> formula for how those four factors should be weighted in the heuristic,\n>> but you are implicitly assigning three of them a weight of zero in\n>> your current patch.\n> \n> I think it's vastly premature to come up with complex sorting systems\n> right now. Just sorting in descending order of age should either have\n> or not have positive effects.\n\nI hear what you are saying, but I'm going to argue the other side.\n\n Let C = 1.00000002065\n Let x = xid_age for a table\n Let v = clamp(n_dead_tuples / reltuples*2) to max 0.5\n Let a = clamp(changes_since_analyze / reltuples) to max 0.5\n\n Let score = C**x + v + a\n\nWith x = 1 million => C**x = 1.02\n x = 200 million => C**x = 62.2\n x = 2**32 => C**x = FLT_MAX - delta\n\nThe maximum contribution to the score that n_dead_tuples and\nchanges_since_analyze can make is 1.0. Once the xid age reaches one\nmillion, it will start to be the dominant factor. By the time it\nreaches the default value of 200 million for freeze_max_age it is\nfar and away the dominant factor, and the xid age of one table vs.\nanother never overflows FLT_MAX given that 2**32 is the largest\nxid age your current system can store in the uint32 you are using.\n\nThe computed score is a 32 bit float, which takes no more memory\nto store than the xid_age field you are storing. 
So storing the\nscore rather than the xid age is memory-wise equivalent to your\npatch.\n\nI doubt the computation time for the exponential is relevant\ncompared to the n*log(n) average sorting time of the quicksort.\nIt is even less relevant compared to the time it takes to vacuum\nthe tables. I doubt my proposal has a measurable run-time impact.\n\nOn the upside, if you have a database with autovacuum configured\naggressively, you can get the tables with the most need vacuumed\nfirst, with need computed relative to vac_scale_factor and\nanl_scale_factor, which helps for a different use case than yours.\nThe xid age problem might not exist for databases where autovacuum\nhas enough resources to never fall behind. Those databases will\nhave other priorities for where autovacuum spends its time.\n\nI'm imagining coming back with two patches later, one that does\nsomething more about choosing which database to vacuum first, and\nanother that recomputes which table to vacuum next when a worker\nfinishes vacuuming a table. These combined could help keep tables\nthat are sensitive to statistics changes vacuumed more frequently\nthan others.\n\n>> relation_needs_vacanalyze currently checks the reltuples, n_dead_tuples\n>> and changes_since_analyze along with vac_scale_factor and\n>> anl_scale_factor for the relation, but only returns booleans dovacuum,\n>> doanalyze, and wraparound.\n> \n> Yeah, I looked at that. It's for a vastly different purpose, namely\n> deciding what's an emergency and what's probably not, but needs\n> attention anyhow. My goal was something a little finer-grained and, I\n> hope, a little easier to establish the (lack of) benefits because only\n> one thing is getting changed.\n\nThat's all I'll say for now. Hopefully other members of the\ncommunity will weigh in.\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Thu, 12 Dec 2019 13:35:49 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make autovacuum sort tables in descending order of xid_age"
},
{
"msg_contents": "\n\nOn 12/12/19 1:35 PM, Mark Dilger wrote:\n> Let C = 1.00000002065\n> Let x = xid_age for a table\n> Let v = clamp(n_dead_tuples / reltuples*2) to max 0.5\n> Let a = clamp(changes_since_analyze / reltuples) to max 0.5\n> \n> Let score = C**x + v + a\n\nI should hasten to add that this is just a proof of concept\nformula, not one that I'm specifically advocating. The point\nis that we can devise a scoring system in which the xid age\nis the dominant factor whenever it could possibly matter,\nwhile still letting other factors prevail when xid age is\nof little consequence.\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Thu, 12 Dec 2019 13:47:23 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make autovacuum sort tables in descending order of xid_age"
},
{
"msg_contents": "\n\nOn 12/12/19 1:35 PM, Mark Dilger wrote:\n> Once the xid age reaches one\n> million, it will start to be the dominant factor.\n\nActually, it doesn't change much from x = 1 to x = 1,000,000\nbut I was planning to add another factor to the formula and\nforgot before sending the email. I'll leave that as an\nexercise for the reader.\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Thu, 12 Dec 2019 14:30:47 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make autovacuum sort tables in descending order of xid_age"
},
{
"msg_contents": "On Tue, Jan 7, 2020 at 12:47 PM David Fetter <david@fetter.org> wrote:\n> Per a suggestion Christophe made, please find attached a patch to\n> $Subject:\n\nCurious, what's the benefit of autovacuum handling the oldest tables\nfirst? If there is a related thread with the discussion, I couldn't find\nit.\n\n> Apart from carefully fudging with pg_resetwal, and short of running in\n> production for a few weeks, what would be some good ways to test this?\n\nGreenplum tests autovacuum using a fault injection framework, which\nwas once proposed by Asim [1] and a function to consume xids [2].\n\nIf that isn't an option, maybe you could acquire a vacuum blocking\nlock on a table, for example by creating index on it inside a dangling\ntransaction. Then after autovacuum worker blocks, in a separate\nsession you could check that a previously older table is now younger.\nDoes that suffice?\n\n\nThanks,\nDavid\n\n[1] https://www.postgresql.org/message-id/CANXE4TdxdESX1jKw48xet-5GvBFVSq=4cgNeioTQff372KO45A@mail.gmail.com\n[2] https://github.com/greenplum-db/gpdb/blob/5feccaae6838e68b1443e46ed39d162613c5ece8/src/test/regress/regress_gp.c#L2003\n\n\n",
"msg_date": "Tue, 7 Jan 2020 17:40:12 -0800",
"msg_from": "David Kimura <david.g.kimura@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make autovacuum sort tables in descending order of xid_age"
},
{
"msg_contents": "On Thu, Dec 12, 2019 at 2:26 PM David Fetter <david@fetter.org> wrote:\n> > I wonder if you might add information about table size, table changes,\n> > and bloat to your RelFrozenXidAge struct and modify rfxa_comparator to\n> > use a heuristic to cost the (age, size, bloat, changed) grouping and\n> > sort on that cost, such that really large bloated tables with old xids\n> > might get vacuumed before smaller, less bloated tables that have\n> > even older xids. Sorting the tables based purely on xid_age seems to\n> > ignore other factors that are worth considering. I do not have a\n> > formula for how those four factors should be weighted in the heuristic,\n> > but you are implicitly assigning three of them a weight of zero in\n> > your current patch.\n>\n> I think it's vastly premature to come up with complex sorting systems\n> right now. Just sorting in descending order of age should either have\n> or not have positive effects.\n\nA lot of previous efforts to improve autovacuum scheduling have fallen\ndown precisely because they did something that was so simple that it\nwas doomed to regress as many cases as it improved, so I wouldn't be\ntoo quick to dismiss Mark's suggestion. In general, sorting by XID age\nseems like it should be better, but it's not hard to come up with a\ncounterexample: suppose table T1 is going to wrap around in 4 hours\nand takes 4 hours to vacuum, but table T2 is going to wrap around in 2\nhours and takes 1 hour to vacuum. Your algorithm will prioritize T2,\nbut it's better to prioritize T1. 
A second autovacuum worker may\nbecome available for this database later and still get T2 done before\nwe run into trouble, but if we don't start T1 right now, we're hosed.\nThe current algorithm gets this right if T1 was defined before T2 and\nthus appears earlier in pg_class; your algorithm gets it wrong\nregardless.\n\nI've had the thought for a while now that perhaps we ought to try to\nestimate the rate of XID consumption, because without that it's really\nhard to make smart decisions. In the above example, if the rate of XID\nconsumption is 4x slower, then it might be smarter to vacuum T2 first,\nespecially if T2 is very heavily updated compared to T1 and might\nbloat if we don't deal with it right away. At the lower rate of XID\nconsumption, T1 is an urgent problem, but not yet an emergency.\nHowever, I've noticed that most people who complain about unexpected\nwraparound vacuums have them hit in peak periods, which when you think\nabout it, makes a lot of sense. If you consume XIDs 10x as fast during\nyour busy time as your non-busy times, then the XID that triggers the\nwraparound scan on any given table is very likely to occur during a\nbusy period. So the *current* rate of XID consumption might not be\nvery informative, which makes figuring out what to do here awfully\ntricky.\n\nI think Mark's suggestion of some kind of formula that takes into\naccount the XID age as well as table size and bloat is probably a\npretty good one. We'll probably need to make some of the parameters of\nthat formula configurable. 
Ideally, they'll be easy enough to\nunderstand that users can say \"oh, I'm using XIDs more or less quickly\nthan normal here, so I need to change parameter X\" and even figure out\n-- without using a calculator -- what sort of value for X might be\nappropriate.\n\nWhen there's a replication slot or prepared transaction or open\ntransaction holding back xmin, you can't advance the relfrozenxid of\nthat table past that point no matter how aggressively you vacuum it,\nso it would probably be a good idea to set up the formula so that the\nweight is based on the amount by which we think we'll be able to\nadvance relfrozenxid rather than, say, the age relative to the last\nXID assigned.\n\nThe dominant cost of vacuuming a table is often the number and size of\nthe indexes rather than the size of the heap, particularly because the\nvisibility map may permit skipping a lot of the heap. So you have N\nindexes that need to be read completely and 1 heap that needs to be\nread only partially. So, whatever portion of the score comes from\nestimating the cost of vacuuming that table ought to factor in the\nsize of the indexes. Perhaps it should also consider the contents of\nthe visibility map, although I'm less sure about that.\n\nOne problem with the exponential in Mark's formula is that it might\ntreat small XID differences between old tables as more important than\nthey really are. I wonder if it might be a better idea to compute\nseveral different quantities and use the maximum from among them as\nthe prioritization. We can model the priority of vacuuming a\nparticular table as the benefit of vacuuming that table multiplied by\nthe effort. The effort is easy to model: just take the size of the\ntable and its indexes. The benefit is trickier, because there are four\ndifferent possible benefits: relfrozenxid advancement, relminmxid\nadvancement, dead tuple removal, and marking pages all-visible. So,\nsuppose we model each benefit by a separate equation. 
For XID\nadvancement, figure figure out the difference between relfrozenxid and\nRecentGlobalXmin; if it's less than vacuum_freeze_min_age, then 0;\nelse multiply the amount in excess of vacuum_freeze_min_age by some\nconstant. Analogously for MXID advancement. For bloat, the number of\ndead tuples multiplied by some other constant, presumably smaller. For\nmarking pages all-visible, if we want to factor that in, the number of\npages that are not currently all-visible multiplied by the smallest\nconstant of all. Take the highest of those benefits and multiple by\nthe size of the table and its indexes to find the priority.\n\nWhatever formula we use exactly, we want XID-age to be the dominant\nconsideration for tables that are in real wraparound danger, but, I\nthink, not to the complete exclusion of table size and bloat\nconsiderations. There is certainly a point at which a table is so near\nwraparound that it needs to take precedence over tables that are just\nbeing vacuumed for bloat, but you don't want that to happen\nunnecessarily, because bloat is *really* bad. And you don't\nnecessarily just have one table in wraparound danger; if there are\nmultiples, you want to choose between them intelligently, and the fact\nthat relfrozenxid differs by 1 shouldn't dominate a 2x difference in\nthe on-disk size.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 9 Jan 2020 12:23:46 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make autovacuum sort tables in descending order of xid_age"
},
{
"msg_contents": "On Thu, Jan 9, 2020 at 12:07 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Dec 12, 2019 at 2:26 PM David Fetter <david@fetter.org> wrote:\n> > > I wonder if you might add information about table size, table changes,\n> > > and bloat to your RelFrozenXidAge struct and modify rfxa_comparator to\n> > > use a heuristic to cost the (age, size, bloat, changed) grouping and\n> > > sort on that cost, such that really large bloated tables with old xids\n> > > might get vacuumed before smaller, less bloated tables that have\n> > > even older xids. Sorting the tables based purely on xid_age seems to\n> > > ignore other factors that are worth considering. I do not have a\n> > > formula for how those four factors should be weighted in the heuristic,\n> > > but you are implicitly assigning three of them a weight of zero in\n> > > your current patch.\n> >\n> > I think it's vastly premature to come up with complex sorting systems\n> > right now. Just sorting in descending order of age should either have\n> > or not have positive effects.\n>\n> A lot of previous efforts to improve autovacuum scheduling have fallen\n> down precisely because they did something that was so simple that it\n> was doomed to regress as many cases as it improved, so I wouldn't be\n> too quick to dismiss Mark's suggestion. 
In general, sorting by XID age\n> seems like it should be better, but it's not hard to come up with a\n> counterexample: suppose table T1 is going to wrap around in 4 hours\n> and takes 4 hours to vacuum, but table T2 is going to wrap around in 2\n> hours and takes 1 hour to vacuum.\n\nAh, so primary purpose of this patch is to add smarts when autovacuum\nis triggered to handle wrap around?\n\n> I've had the thought for a while now that perhaps we ought to try to\n> estimate the rate of XID consumption, because without that it's really\n> hard to make smart decisions.\n\nVery interesting.\n\nCould there be value in making this feature more preventative, perhaps\nby triggering emergency autovacuum earlier based on some combination\nof these heuristics rather than autovacuum_freeze_max_age alone?\n\nThanks,\nDavid\n\n\n",
"msg_date": "Thu, 9 Jan 2020 12:55:03 -0800",
"msg_from": "David Kimura <david.g.kimura@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make autovacuum sort tables in descending order of xid_age"
},
{
"msg_contents": "On Thu, Jan 09, 2020 at 12:23:46PM -0500, Robert Haas wrote:\n> On Thu, Dec 12, 2019 at 2:26 PM David Fetter <david@fetter.org> wrote:\n> > > I wonder if you might add information about table size, table changes,\n> > > and bloat to your RelFrozenXidAge struct and modify rfxa_comparator to\n> > > use a heuristic to cost the (age, size, bloat, changed) grouping and\n> > > sort on that cost, such that really large bloated tables with old xids\n> > > might get vacuumed before smaller, less bloated tables that have\n> > > even older xids. Sorting the tables based purely on xid_age seems to\n> > > ignore other factors that are worth considering. I do not have a\n> > > formula for how those four factors should be weighted in the heuristic,\n> > > but you are implicitly assigning three of them a weight of zero in\n> > > your current patch.\n> >\n> > I think it's vastly premature to come up with complex sorting systems\n> > right now. Just sorting in descending order of age should either have\n> > or not have positive effects.\n> \n> A lot of previous efforts to improve autovacuum scheduling have fallen\n> down precisely because they did something that was so simple that it\n> was doomed to regress as many cases as it improved, so I wouldn't be\n> too quick to dismiss Mark's suggestion. In general, sorting by XID age\n> seems like it should be better, but it's not hard to come up with a\n> counterexample: suppose table T1 is going to wrap around in 4 hours\n> and takes 4 hours to vacuum, but table T2 is going to wrap around in 2\n> hours and takes 1 hour to vacuum. Your algorithm will prioritize T2,\n> but it's better to prioritize T1. 
A second autovacuum worker may\n> become available for this database later and still get T2 done before\n> we run into trouble, but if we don't start T1 right now, we're hosed.\n> The current algorithm gets this right if T1 was defined before T2 and\n> thus appears earlier in pg_class; your algorithm gets it wrong\n> regardless.\n\nDoes it get it more wrong than the current system where there's\nessentially no attempt to set priorities? If so, how?\n\n> I've had the thought for a while now that perhaps we ought to try to\n> estimate the rate of XID consumption, because without that it's really\n> hard to make smart decisions. In the above example, if the rate of XID\n> consumption is 4x slower, then it might be smarter to vacuum T2 first,\n> especially if T2 is very heavily updated compared to T1 and might\n> bloat if we don't deal with it right away. At the lower rate of XID\n> consumption, T1 is an urgent problem, but not yet an emergency.\n> However, I've noticed that most people who complain about unexpected\n> wraparound vacuums have them hit in peak periods, which when you think\n> about it, makes a lot of sense. If you consume XIDs 10x as fast during\n> your busy time as your non-busy times, then the XID that triggers the\n> wraparound scan on any given table is very likely to occur during a\n> busy period. So the *current* rate of XID consumption might not be\n> very informative, which makes figuring out what to do here awfully\n> tricky.\n> \n> I think Mark's suggestion of some kind of formula that takes into\n> account the XID age as well as table size and bloat is probably a\n> pretty good one. We'll probably need to make some of the parameters of\n> that formula configurable. 
Ideally, they'll be easy enough to\n> understand that users can say \"oh, I'm using XIDs more or less quickly\n> than normal here, so I need to change parameter X\" and even figure out\n> -- without using a calculator -- what sort of value for X might be\n> appropriate.\n> \n> When there's a replication slot or prepared transaction or open\n> transaction holding back xmin, you can't advance the relfrozenxid of\n> that table past that point no matter how aggressively you vacuum it,\n> so it would probably be a good idea to set up the formula so that the\n> weight is based on the amount by which we think we'll be able to\n> advance relfrozenxid rather than, say, the age relative to the last\n> XID assigned.\n> \n> The dominant cost of vacuuming a table is often the number and size of\n> the indexes rather than the size of the heap, particularly because the\n> visibility map may permit skipping a lot of the heap. So you have N\n> indexes that need to be read completely and 1 heap that needs to be\n> read only partially. So, whatever portion of the score comes from\n> estimating the cost of vacuuming that table ought to factor in the\n> size of the indexes. Perhaps it should also consider the contents of\n> the visibility map, although I'm less sure about that.\n> \n> One problem with the exponential in Mark's formula is that it might\n> treat small XID differences between old tables as more important than\n> they really are. I wonder if it might be a better idea to compute\n> several different quantities and use the maximum from among them as\n> the prioritization. We can model the priority of vacuuming a\n> particular table as the benefit of vacuuming that table multiplied by\n> the effort. The effort is easy to model: just take the size of the\n> table and its indexes. The benefit is trickier, because there are four\n> different possible benefits: relfrozenxid advancement, relminmxid\n> advancement, dead tuple removal, and marking pages all-visible. 
So,\n> suppose we model each benefit by a separate equation. For XID\n> advancement, figure out the difference between relfrozenxid and\n> RecentGlobalXmin; if it's less than vacuum_freeze_min_age, then 0;\n> else multiply the amount in excess of vacuum_freeze_min_age by some\n> constant. Analogously for MXID advancement. For bloat, the number of\n> dead tuples multiplied by some other constant, presumably smaller. For\n> marking pages all-visible, if we want to factor that in, the number of\n> pages that are not currently all-visible multiplied by the smallest\n> constant of all. Take the highest of those benefits and multiply by\n> the size of the table and its indexes to find the priority.\n\nThis is all sounding like really important work into the future.\n\n> Whatever formula we use exactly, we want XID-age to be the dominant\n> consideration for tables that are in real wraparound danger,\n\n...which is what this patch does.\n\n> but, I think, not to the complete exclusion of table size and bloat\n> considerations. There is certainly a point at which a table is so\n> near wraparound that it needs to take precedence over tables that\n> are just being vacuumed for bloat, but you don't want that to happen\n> unnecessarily, because bloat is *really* bad. And you don't\n> necessarily just have one table in wraparound danger; if there are\n> multiples, you want to choose between them intelligently, and the\n> fact that relfrozenxid differs by 1 shouldn't dominate a 2x\n> difference in the on-disk size.\n\nI agree that it's a complex situation, and that many different\napproaches will eventually need to be brought to bear.\n\nWhat concerns me about introducing a big lump of complexity here is\ndisentangling the effects of each part and of their interaction terms.\nWe're not, to put it mildly, set up to do ANOVA\n(https://en.wikipedia.org/wiki/Analysis_of_variance ) , ANCOVA (\nhttps://en.wikipedia.org/wiki/Analysis_of_covariance ), etc. 
on\nchanges.\n\nGiven the above, I'd like to make the case for changing just this one\nthing at first and seeing whether the difference it makes is generally\npositive.\n\nFuture patches could build on those results.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Sat, 11 Jan 2020 18:53:36 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Make autovacuum sort tables in descending order of xid_age"
},
{
"msg_contents": "On 1/11/20 12:53 PM, David Fetter wrote:\n> \n> I agree that it's a complex situation, and that many different\n> approaches will eventually need to be brought to bear.\n> \n> What concerns me about introducing a big lump of complexity here is\n> disentangling the effects of each part and of their interaction terms.\n> We're not, to put it mildly, set up to do ANOVA\n> (https://en.wikipedia.org/wiki/Analysis_of_variance ) , ANCOVA (\n> https://en.wikipedia.org/wiki/Analysis_of_covariance ), etc. on\n> changes.\n> \n> Given the above, I'd like to make the case for changing just this one\n> thing at first and seeing whether the difference it makes is generally\n> positive.\n\nMark, Robert, thoughts on this?\n\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Mon, 30 Mar 2020 10:09:33 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Make autovacuum sort tables in descending order of xid_age"
},
{
"msg_contents": "\n\n> On Mar 30, 2020, at 7:09 AM, David Steele <david@pgmasters.net> wrote:\n> \n> On 1/11/20 12:53 PM, David Fetter wrote:\n>> I agree that it's a complex situation, and that many different\n>> approaches will eventually need to be brought to bear.\n>> What concerns me about introducing a big lump of complexity here is\n>> disentangling the effects of each part and of their interaction terms.\n>> We're not, to put it mildly, set up to do ANOVA\n>> (https://en.wikipedia.org/wiki/Analysis_of_variance ) , ANCOVA (\n>> https://en.wikipedia.org/wiki/Analysis_of_covariance ), etc. on\n>> changes.\n>> Given the above, I'd like to make the case for changing just this one\n>> thing at first and seeing whether the difference it makes is generally\n>> positive.\n> \n> Mark, Robert, thoughts on this?\n\nI have not been working on this issue lately, but as I recall, my concern was that changing the behavior of autovacuum could introduce regressions for some users, so we should be careful to get it right before we rush to release anything. It didn't seem like the proposed changes took enough into account. But that's clearly a judgement call, having to do with how cautious any particular person thinks we should be. I don't feel strongly enough to stand in the way if the general consensus is that this is a good enough implementation.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 30 Mar 2020 09:20:15 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Make autovacuum sort tables in descending order of xid_age"
},
{
"msg_contents": "On Mon, Mar 30, 2020 at 09:20:15AM -0700, Mark Dilger wrote:\n> I have not been working on this issue lately, but as I recall, my\n> concern was that changing the behavior of autovacuum could introduce\n> regressions for some users, so we should be careful to get it right\n> before we rush to release anything. It didn't seem like the\n> proposed changes took enough into account. But that's clearly a\n> judgement call, having to do with how cautious any particular person\n> thinks we should be. I don't feel strongly enough to stand in the\n> way if the general concensus is that this is a good enough\n> implementation. \n\nEchoing with what has been already mentioned on this thread, I think\nthat autovacuum scheduling is a hard problem, and I would be rather\nscared to change by default a behavior that has proved to work in some\ncases, but could potentially doom others. I have an idea though: we\ncould make the scheduling behavior of autovacuum optional.\n\nAnyway, the thread has stalled for a couple of months now, and we\ndon't have a clear consensus about this approach, so I am marking this\nthread as returned with feedback.\n--\nMichael",
"msg_date": "Mon, 3 Aug 2020 15:53:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Make autovacuum sort tables in descending order of xid_age"
}
] |
[
{
"msg_contents": "While reviewing the patches in\nhttp://postgr.es/m/20191126.213752.2132434859202124793.horikyota.ntt@gmail.com\nI noticed three related problems. The defects motivating that other thread\nare specific to wal_level=minimal. The related problems are not specific to\nany wal_level, hence my starting a separate thread.\n\n\n1. We don't sync unlogged tables after skipFsync=true operations.\n\nsmgrwrite() explains the contract for skipFsync=true. Concretely, it causes\nmd.c to skip enqueueing the relation to be synced during the next checkpoint.\nThree places in the tree call smgrextend(..., skipFsync=true) and/or\nsmgrwrite(..., skipFsync=true):\n\n rewriteheap.c (CLUSTER, VACUUM FULL)\n _bt_blwritepage() (CREATE INDEX)\n RelationCopyStorage() (ALTER TABLE SET TABLESPACE)\n\nThey use logic like this to decide when to sync:\n\n\t/*\n\t * If the rel is WAL-logged, must fsync before commit. We use heap_sync\n\t * to ensure that the toast table gets fsync'd too. (For a temp or\n\t * unlogged rel we don't care since the data will be gone after a crash\n\t * anyway.)\n\t *\n\t * It's obvious that we must do this when not WAL-logging the copy. It's\n\t * less obvious that we have to do it even if we did WAL-log the copied\n\t * pages. The reason is that since we're copying outside shared buffers, a\n\t * CHECKPOINT occurring during the copy has no way to flush the previously\n\t * written data to disk (indeed it won't know the new rel even exists). A\n\t * crash later on would replay WAL from the checkpoint, therefore it\n\t * wouldn't replay our earlier WAL entries. If we do not fsync those pages\n\t * here, they might still not be on disk when the crash occurs.\n\t */\n\tif (relpersistence == RELPERSISTENCE_PERMANENT || copying_initfork)\n\t\tsmgrimmedsync(dst, forkNum);\n\nHowever, the reasoning about unlogged rels is incomplete. Normally, we sync\nunlogged rels during each checkpoint after we wrote out a dirty buffer, just\nas we do for permanent rels. 
(It would be enough to sync them during the\nshutdown checkpoint. That would require a data structure to track unlogged\nrelation files dirtied since StartupXLOG().) However, due to the code above,\nwe miss syncing them if one of these DDL operations is the last operation\nchanging the unlogged rel before a shutdown checkpoint. I've attached a\ntest-only patch demonstrating the problem. This can cause unlogged tables to\nhave invalid contents if an OS crash happens with PostgreSQL stopped.\n\nI think the fix is simple: test \"relpersistence != RELPERSISTENCE_TEMP\"\ninstead.\n\n\n2. RelationTruncate() needs defense against concurrent checkpoints.\n\nIn RelationTruncate(), nothing stops a checkpoint from starting and completing\nafter XLogInsert(... XLOG_SMGR_TRUNCATE) and before smgrtruncate(). That\ncheckpoint could move the redo pointer past the XLOG_SMGR_TRUNCATE record,\nmaking the outcome equivalent to having never written the WAL record. If the\nOS crashes before the first post-smgrtruncate() checkpoint, the filesystem may\nforget some or all ftruncate() calls. The symptom would be unexpected tuples\nin the relation. Example test procedure:\n\nBEGIN;\nCREATE TABLE t (c) AS SELECT 1;\n-- set breakpoint at XLogFlush()\nTRUNCATE t; -- hit breakpoint\n-- while stopped at breakpoint, issue CHECKPOINT, then release debugger\nCOMMIT;\n-- hard crash forgets some ftruncate()\npg_ctl -w start # REDO does not reissue ftruncate()\n\nThe fix is to set delayChkpt before writing WAL and clear it after\nsmgrtruncate(), like we do in EndPrepare().\n\n\n3. 
smgrtruncate()/mdtruncate() is not qualified to be called in recovery.\n\nsmgr_redo() calls smgrtruncate() to replay XLOG_SMGR_TRUNCATE records.\nHowever, mdtruncate() relies on the following md.c invariant:\n\n *\tOn disk, a relation must consist of consecutively numbered segment\n *\tfiles in the pattern\n *\t\t-- Zero or more full segments of exactly RELSEG_SIZE blocks each\n *\t\t-- Exactly one partial segment of size 0 <= size < RELSEG_SIZE blocks\n *\t\t-- Optionally, any number of inactive segments of size 0 blocks.\n\nThat invariant does not hold before reachedConsistency. Suppose the unclean\nshutdown happened after the original (non-recovery) mdtruncate() and before\nanything synced the truncated segments. The OS might remember the ftruncate()\nof segment zero and lose the ftruncate() of segment one, upsetting the\ninvariant. The symptom would again be unexpected tuples in the relation.\n\nMy preferred fix is to make mdtruncate() truncate each inactive segment,\nstopping when opening a segment fails with ENOENT, like mdunlinkfork() does.\nAn alternative would be to add a last_segment_truncated field to\nxl_smgr_truncate, then truncate up to that segment.\n\n\nDoes anyone see a way to improve on those proposed fixes?\n\nThanks,\nnm",
"msg_date": "Sat, 30 Nov 2019 01:05:14 -0500",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Recovery vs. RelationTruncate(); skipFsync vs. unlogged rels"
}
] |
[
{
"msg_contents": "Hello,\n\nSteps to reproduce:\n\nselect extract(epoch from '2001-09-09 01:46:39.999999'::timestamp)\n\nreturns 999999999.999999 as expected\n\nwhile\n\nselect extract(epoch from '2001-09-09 01:46:40.000021'::timestamp)\n\nreturns 1000000000.00002 - 1 microsecond is truncated.\n\nObviously, it is due to the fact that extract epoch returns double\nprecision which in turn has 15 decimal digits precision.\n\nWhile there is a pretty simple workaround in C that returns\nmicroseconds since the Unix epoch:\n\nDatum\nto_microseconds(PG_FUNCTION_ARGS) {\n /* 946684800000000 us = offset from PostgreSQL epoch 2000-01-01 to Unix epoch 1970-01-01 */\n Timestamp arg = PG_GETARG_TIMESTAMP(0)+946684800000000;\n PG_RETURN_INT64(arg);\n}\n\nI was not able to find another way of doing that (i.e. without a C\nfunction).\n",
"msg_date": "Sat, 30 Nov 2019 12:28:18 +0300",
"msg_from": "Petr Fedorov <petr.fedorov@phystech.edu>",
"msg_from_op": true,
"msg_subject": "Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "Petr Fedorov <petr.fedorov@phystech.edu> writes:\n> select extract(epoch from '2001-09-09 01:46:40.000021'::timestamp)\n> returns 1000000000.00002 - 1 microsecond is truncated.\n> Obviously, it is due to the fact that extract epoch returns double\n> precision which in turn has 15 decimal digits precision.\n\nI can't get very excited about this. However, it might be worth\nnoting that v12 and HEAD print \"1000000000.000021\" as expected,\nthanks to the Ryu float output code. You can get that from older\nbranches as well if you set extra_float_digits = 1.\n\nBy my arithmetic, IEEE float8 ought to be able to represent\nmicroseconds accurately out to about 285 years either way from the\n1970 epoch, so for practical purposes it'll be fine for a long time.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 30 Nov 2019 10:21:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "On Sat, Nov 30, 2019 at 10:28 PM Petr Fedorov <petr.fedorov@phystech.edu> wrote:\n> Obviously, it is due to the fact that extract epoch returns double\n> precision which in turn has 15 decimal digits precision.\n\nI guess this deviation from the SQL standard (\"exact numeric\") made\nsense when PostgreSQL used double for timestamps, but would break a\nlot of queries if we changed it.\n\n\n",
"msg_date": "Mon, 2 Dec 2019 11:50:00 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Sat, Nov 30, 2019 at 10:28 PM Petr Fedorov <petr.fedorov@phystech.edu> wrote:\n>> Obviously, it is due to the fact that extract epoch returns double\n>> precision which in turn has 15 decimal digits precision.\n\n> I guess this deviation from the SQL standard (\"exact numeric\") made\n> sense when PostgreSQL used double for timestamps, but would break a\n> lot of queries if we changed it.\n\nHmmm ... well, now that you mention it, would it really break things\nif we made it return numeric? There's an implicit cast to float8,\nso it seems like queries requiring that type would still work.\n\nThere might be a performance-related argument against switching,\nperhaps.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 01 Dec 2019 17:59:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "It appears that extract epoch returns double precision, not float8. And\nthe program below seems to be demonstrating that there are enough\n'floating-point numbers' as defined by IEEE-754 to represent\n1000000000.000021 precisely enough:\n\n#include <cmath>\n#include <iostream>\n#include <iomanip>\n#include <limits>\n\nint main() {\n double from = 1000000000.000020;\n std::cout << std::setprecision(56) << from << \" (\" << std::hexfloat <<\nfrom << \") \" << std::endl;\n for(auto i = 0; i < 15; ++i) {\n double to = std::nextafter( from, std::numeric_limits<double>::max());\n std::cout << std::defaultfloat << to << std::hexfloat << \" (\" << to\n<< \") \" << std::endl;\n from = to;\n }\n}\n\nOutputs:\n\n1000000000.00002002716064453125 (0x1.dcd65000000a8p+29)\n1000000000.00002014636993408203125 (0x1.dcd65000000a9p+29)\n1000000000.0000202655792236328125 (0x1.dcd65000000aap+29)\n1000000000.00002038478851318359375 (0x1.dcd65000000abp+29)\n1000000000.000020503997802734375 (0x1.dcd65000000acp+29)\n1000000000.00002062320709228515625 (0x1.dcd65000000adp+29)\n1000000000.0000207424163818359375 (0x1.dcd65000000aep+29)\n1000000000.00002086162567138671875 (0x1.dcd65000000afp+29)\n1000000000.0000209808349609375 (0x1.dcd65000000bp+29)\n1000000000.00002110004425048828125 (0x1.dcd65000000b1p+29)\n1000000000.0000212192535400390625 (0x1.dcd65000000b2p+29)\n1000000000.00002133846282958984375 (0x1.dcd65000000b3p+29)\n1000000000.000021457672119140625 (0x1.dcd65000000b4p+29)\n1000000000.00002157688140869140625 (0x1.dcd65000000b5p+29)\n1000000000.0000216960906982421875 (0x1.dcd65000000b6p+29)\n1000000000.00002181529998779296875 (0x1.dcd65000000b7p+29)\n\nI'm not an expert in floating point math but hopefully it means that no\ntype change is required - double precision can handle it. \n\nAnd since it works correctly on v12 for this particular date, maybe all\nthat is needed is to verify that it works for the other dates too! 
For example, what was changed in v12 (compared to the 11.6 I use) so that\nextract epoch works correctly?\n\n\n02.12.2019 01:59, Tom Lane wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n>> On Sat, Nov 30, 2019 at 10:28 PM Petr Fedorov <petr.fedorov@phystech.edu> wrote:\n>>> Obviously, it is due to the fact that extract epoch returns double\n>>> precision which in turn has 15 decimal digits precision.\n>> I guess this deviation from the SQL standard (\"exact numeric\") made\n>> sense when PostgreSQL used double for timestamps, but would break a\n>> lot of queries if we changed it.\n> Hmmm ... well, now that you mention it, would it really break things\n> if we made it return numeric? There's an implicit cast to float8,\n> so it seems like queries requiring that type would still work.\n>\n> There might be a performance-related argument against switching,\n> perhaps.\n>\n> \t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 2 Dec 2019 14:08:43 +0300",
"msg_from": "Petr Fedorov <petr.fedorov@phystech.edu>",
"msg_from_op": true,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "On Tue, Dec 3, 2019 at 12:08 AM Petr Fedorov <petr.fedorov@phystech.edu> wrote:\n> It appears that extract epoch returns double precision, not float8. And\n> the program below seems to be demonstrating that there are enough\n> 'floating-point numbers' as defined by IEEE-754 to represent\n> 1000000000.000021 precisely enough:\n\nDouble precision and float8 are different names for the same type in PostgreSQL.\n\n> I'm not an expert in floating point math but hopefully it means that no\n> type change is required - double precision can handle it.\n\nMe neither, but the SQL standard requires us to use an exact numeric\ntype, so it's wrong on that level by definition.\n\nIt's also wrong because binary floating point numbers can't represent\n0.000001 (one microsecond represented as seconds) exactly, and that's\nour unit of counting for timestamps. You can get pretty far by\nthinking of the decimal number you see on the screen as the true\nnumber and the double as a fuzzy internal storage or transport that\ndoes the job just fine due to the round trip conversion guarantee\nprovided by DBL_DIG, but the double is still going to have the wrong\nvalue in some cases. As soon as you start doing any arithmetic or\ncomparisons with the double directly, interesting things can start to\nhappen to make the error visible and break things; for example\n0.1::float8 + 0.2::float8 = 0.3::float8 is false.\n\n> And since it works correctly on v12 for this particular date may be all\n> what is needed it to verify that it works for the other dates too! 
For\n> example what was changed in v12 (comparing to 11.6 I use) so extract\n> epoch works correctly?\n\nPostgreSQL 12 adopted a different algorithm[1] for converting float8\nto text that can affect how many digits are shown, as Tom explained.\nThe manual has some notes about it[2].\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=02ddd499322ab6f2f0d58692955dc9633c2150fc\n[2] https://www.postgresql.org/docs/12/datatype-numeric.html#DATATYPE-FLOAT\n\n\n",
"msg_date": "Tue, 3 Dec 2019 11:52:31 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "On 2019-12-02 23:52, Thomas Munro wrote:\n>> I'm not an expert in floating point math but hopefully it means that no\n>> type change is required - double precision can handle it.\n> Me neither, but the SQL standard requires us to use an exact numeric\n> type, so it's wrong on that level by definition.\n\nI looked into this (changing the return types of date_part()/extract() \nfrom float8 to numeric).\n\nOne problem (other than perhaps performance, tbd.) is that this would no \nlonger allow processing infinite timestamps, since numeric does not \nsupport infinity. It could be argued that running extract() on infinite \ntimestamps isn't very useful, but it's something to consider explicitly.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 25 May 2020 15:28:54 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> One problem (other than perhaps performance, tbd.) is that this would no \n> longer allow processing infinite timestamps, since numeric does not \n> support infinity. It could be argued that running extract() on infinite \n> timestamps isn't very useful, but it's something to consider explicitly.\n\nI wonder if it's time to fix that, ie introduce +-Infinity into numeric.c.\nThis isn't the first time we've seen issues with numeric not being a\nsuperset of float, and it won't be the last.\n\nAt first glance there's no free bits in the on-disk format for numeric,\nbut we could do something by defining the low-order bits of the header\nword for a NaN to distinguish between real NaN and +/- infinity.\nIt looks like those bits should reliably be zero right now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 25 May 2020 09:43:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "On 5/25/20 3:28 PM, Peter Eisentraut wrote:\n> On 2019-12-02 23:52, Thomas Munro wrote:\n>>> I'm not an expert in floating point math but hopefully it means that no\n>>> type change is required - double precision can handle it.\n>> Me neither, but the SQL standard requires us to use an exact numeric\n>> type, so it's wrong on that level by definition.\n> \n> I looked into this (changing the return types of date_part()/extract()\n> from float8 to numeric).\n\nI think what would be better is to have a specific date_part function\nfor each part and have extract translate to the appropriate one. This\nis particularly interesting for epoch but it would also allow us to\nreturn the correct type mandated by the spec.\n\n(I would also accept a specific date_part per return type instead of per\npart, that would probably even be better.)\n-- \nVik Fearing\n\n\n",
"msg_date": "Mon, 25 May 2020 18:01:07 +0200",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> On 5/25/20 3:28 PM, Peter Eisentraut wrote:\n>> I looked into this (changing the return types of date_part()/extract()\n>> from float8 to numeric).\n\n> I think what would be better is to have a specific date_part function\n> for each part and have extract translate to the appropriate one.\n\nDoesn't really work for upwards compatibility with existing views,\nwhich will have calls to date_part(text, ...) embedded in them.\n\nActually, now that I think about it, changing the result type of\ndate_part() is likely to be problematic anyway for such cases.\nIt's not going to be good if pg_upgrade's dump/restore of a view\nresults in a new output column type; especially if it's a\nmaterialized view.\n\nSo maybe what we'd have to do is leave date_part() alone for\nlegacy compatibility, and invent new functions that the extract()\nsyntax would now be translated to. While at it, maybe we could\nfix things so that the syntax reverse-lists the same way instead\nof injecting Postgres-isms...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 25 May 2020 12:40:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "On 5/25/20 6:40 PM, Tom Lane wrote:\n> Vik Fearing <vik@postgresfriends.org> writes:\n>> On 5/25/20 3:28 PM, Peter Eisentraut wrote:\n>>> I looked into this (changing the return types of date_part()/extract()\n>>> from float8 to numeric).\n> \n>> I think what would be better is to have a specific date_part function\n>> for each part and have extract translate to the appropriate one.\n> \n> Doesn't really work for upwards compatibility with existing views,\n> which will have calls to date_part(text, ...) embedded in them.\n> \n> Actually, now that I think about it, changing the result type of\n> date_part() is likely to be problematic anyway for such cases.\n> It's not going to be good if pg_upgrade's dump/restore of a view\n> results in a new output column type; especially if it's a\n> materialized view.\n> \n> So maybe what we'd have to do is leave date_part() alone for\n> legacy compatibility, and invent new functions that the extract()\n> syntax would now be translated to.\n\n\nI'm sorry, I wasn't clear. I was suggesting adding new functions while\nalso keeping the current generic function. So exactly what you say in\nthat last paragraph.\n\nAlthough <extract expression> has a fixed list of constant parts,\ndate_part() allows the part to be variable. So we need to keep it\nanyway for cases like this contrived example:\n\n SELECT date_part(p, now())\n FROM UNNEST(ARRAY['epoch', 'year', 'second']) AS u (p)\n\n\n> While at it, maybe we could\n> fix things so that the syntax reverse-lists the same way instead\n> of injecting Postgres-isms...\n\n\nI'm not sure what this means.\n-- \nVik Fearing\n\n\n",
"msg_date": "Mon, 25 May 2020 18:52:01 +0200",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> On 5/25/20 6:40 PM, Tom Lane wrote:\n>> While at it, maybe we could\n>> fix things so that the syntax reverse-lists the same way instead\n>> of injecting Postgres-isms...\n\n> I'm not sure what this means.\n\nThis:\n\nregression=# create view myview as select extract(year from current_timestamp) as y;\nCREATE VIEW\nregression=# \\d+ myview\n...\nView definition:\n SELECT date_part('year'::text, CURRENT_TIMESTAMP) AS y;\n\nWhat had been a 100% spec-compliant view definition is now quite\nPostgres-specific. I fixed some similar problems in 0bb51aa96 (before\nthat, the CURRENT_TIMESTAMP part would've reverse-listed differently\ntoo); but I didn't tackle EXTRACT(), SUBSTRING(), and other cases.\n\nI'm not claiming that we really need to fix all of those. But if we are\ngoing to pick nits about which data type EXTRACT() returns then I think\nit's legit to worry about its reverse-list representation at the same\ntime ... especially if we must touch the grammar's translation anyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 25 May 2020 13:07:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "I wrote:\n> What had been a 100% spec-compliant view definition is now quite\n> Postgres-specific. I fixed some similar problems in 0bb51aa96 (before\n> that, the CURRENT_TIMESTAMP part would've reverse-listed differently\n> too); but I didn't tackle EXTRACT(), SUBSTRING(), and other cases.\n> I'm not claiming that we really need to fix all of those. But if we are\n> going to pick nits about which data type EXTRACT() returns then I think\n> it's legit to worry about its reverse-list representation at the same\n> time ... especially if we must touch the grammar's translation anyway.\n\nBTW, shortly after sending that I had an idea about how to do it without\nadding a boatload of new parsetree infrastructure, which has been the\nmain reason why nobody has wanted to tackle it. The obvious way to do\nthis is to make a new kind of expression node, but that cascades into\nlots and lots of places (see 0bb51aa96, plus the later commits that\nfixed oversights in it :-(). It's a lot of work for a mostly-cosmetic\nissue.\n\nHowever: suppose that we continue to translate these things into FuncExpr\nnodes, the same as always, but we add a new CoercionForm variant, say\nCOERCE_SQL_SYNTAX. 99% of the system ignores FuncExpr.funcformat,\nand would continue to do so, but ruleutils.c would take it to mean\nthat (1) the call should be reverse-listed as some special SQL syntax\nand (2) the funcid is one of a small set of built-in functions for\nwhich ruleutils.c knows what to emit. (If it doesn't recognize the\nfuncid, it could either throw an error, or fall back to normal display\nof the node.) 
For cases such as EXTRACT, this would also represent\na promise that specific arguments are Const nodes from which the\ndesired keyword can be extracted.\n\nThis is kind of an abuse of \"CoercionForm\", since that typedef name\nimplies that it only talks about how to handle cast cases, but\nsemantically it's always been a how-to-display-function-calls thing.\nWe could either hold our noses about that or rename the typedef.\n\nIf we went this way then we could easily clean up most of the other\nweird-SQL-syntax function call cases, incrementally over time,\nwithout a lot of additional work.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 25 May 2020 13:35:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "On Mon, May 25, 2020 at 09:43:32AM -0400, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > One problem (other than perhaps performance, tbd.) is that this would no \n> > longer allow processing infinite timestamps, since numeric does not \n> > support infinity. It could be argued that running extract() on infinite \n> > timestamps isn't very useful, but it's something to consider explicitly.\n> \n> I wonder if it's time to fix that, ie introduce +-Infinity into numeric.c.\n> This isn't the first time we've seen issues with numeric not being a\n> superset of float, and it won't be the last.\n> \n> At first glance there's no free bits in the on-disk format for numeric,\n> but we could do something by defining the low-order bits of the header\n> word for a NaN to distinguish between real NaN and +/- infinity.\n> It looks like those bits should reliably be zero right now.\n\n+1 for adding +/- infinity to NUMERIC.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Mon, 25 May 2020 21:29:21 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost\n when extracting epoch"
},
{
"msg_contents": "On 2020-05-25 15:28, Peter Eisentraut wrote:\n> On 2019-12-02 23:52, Thomas Munro wrote:\n>>> I'm not an expert in floating point math but hopefully it means that no\n>>> type change is required - double precision can handle it.\n>> Me neither, but the SQL standard requires us to use an exact numeric\n>> type, so it's wrong on that level by definition.\n> \n> I looked into this (changing the return types of date_part()/extract()\n> from float8 to numeric).\n> \n> One problem (other than perhaps performance, tbd.) is that this would no\n> longer allow processing infinite timestamps, since numeric does not\n> support infinity. It could be argued that running extract() on infinite\n> timestamps isn't very useful, but it's something to consider explicitly.\n\nNow that numeric supports infinity, here is a patch that changes the \nreturn types of date_part() to numeric. It's not meant to be a final \nversion, but it is useful for discussing a few things.\n\nThe internal implementation could be made a bit more elegant if we had \nvariants of int4_numeric() and int8_numeric() that don't have to go \nthrough fmgr. This would also help in other areas of the code. There \nare probably also other ways in which the internals could be made more \ncompact; I just converted them fairly directly.\n\nWhen extracting seconds or microseconds, I made it always produce 6 or 3 \ndecimal places, even if they are zero. I don't know if we want that or \nwhat behavior we want. That's what all the changes in the regression \ntests are about. Everything else passes unchanged.\n\nThe 'julian' field is a bit of a mystery. First of all it's not \ndocumented. The regression tests only test the rounded output, perhaps \nto avoid floating point differences. When you do date_part('julian', \ndate), then you get a correct Julian Day. But date_part('julian', \ntimestamp[tz]) gives incorrect Julian Date values that are off by 12 \nhours. 
My patch doesn't change that, I just noticed when I took away \nthe round() call in the regression tests. Those calls now produce a \ndifferent number of decimal places.\n\nIt might make sense to make date_part(..., date) a separate C function \ninstead of an SQL wrapper around date_part(..., timestamp). That could \nreturn integer and could reject nonsensical fields such as \"minute\". \nThen we could also make a less contorted implementation of \ndate_part('julian', date) that matches to_char(date, 'J') and remove the \nincorrect implementation of date_part('julian', timestamp).\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 4 Aug 2020 16:08:07 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "út 4. 8. 2020 v 16:08 odesílatel Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> napsal:\n\n> On 2020-05-25 15:28, Peter Eisentraut wrote:\n> > On 2019-12-02 23:52, Thomas Munro wrote:\n> >>> I'm not an expert in floating point math but hopefully it means that no\n> >>> type change is required - double precision can handle it.\n> >> Me neither, but the SQL standard requires us to use an exact numeric\n> >> type, so it's wrong on that level by definition.\n> >\n> > I looked into this (changing the return types of date_part()/extract()\n> > from float8 to numeric).\n> >\n> > One problem (other than perhaps performance, tbd.) is that this would no\n> > longer allow processing infinite timestamps, since numeric does not\n> > support infinity. It could be argued that running extract() on infinite\n> > timestamps isn't very useful, but it's something to consider explicitly.\n>\n> Now that numeric supports infinity, here is a patch that changes the\n> return types of date_part() to numeric. It's not meant to be a final\n> version, but it is useful for discussing a few things.\n>\n> The internal implementation could be made a bit more elegant if we had\n> variants of int4_numeric() and int8_numeric() that don't have to go\n> through fmgr. This would also help in other areas of the code. There\n> are probably also other ways in which the internals could be made more\n> compact; I just converted them fairly directly.\n>\n> When extracting seconds or microseconds, I made it always produce 6 or 3\n> decimal places, even if they are zero. I don't know if we want that or\n> what behavior we want. That's what all the changes in the regression\n> tests are about. Everything else passes unchanged.\n>\n> The 'julian' field is a bit of a mystery. First of all it's not\n> documented. The regression tests only test the rounded output, perhaps\n> to avoid floating point differences. When you do date_part('julian',\n> date), then you get a correct Julian Day. 
But date_part('julian',\n> timestamp[tz]) gives incorrect Julian Date values that are off by 12\n> hours. My patch doesn't change that, I just noticed when I took away\n> the round() call in the regression tests. Those calls now produce a\n> different number of decimal places.\n>\n> It might make sense to make date_part(..., date) a separate C function\n> instead of an SQL wrapper around date_part(..., timestamp). That could\n> return integer and could reject nonsensical fields such as \"minute\".\n> Then we could also make a less contorted implementation of\n> date_part('julian', date) that matches to_char(date, 'J') and remove the\n> incorrect implementation of date_part('julian', timestamp).\n>\n\nI like a idea to have d date variant of date_part\n\nPavel\n\n\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>",
"msg_date": "Tue, 4 Aug 2020 16:21:56 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
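The precision loss named in the subject line can be reproduced outside PostgreSQL. A minimal Python sketch (illustrative, not PostgreSQL code) of why a float8 epoch drops microseconds around 10^9 seconds while an exact decimal type, the analogue of numeric, keeps them:

```python
from decimal import Decimal

# Epoch of '2001-09-09 01:46:40.000001' UTC is 1_000_000_000.000001 s.
# A 64-bit float carries ~15-16 significant decimal digits; this value
# needs 16, so at a 15-significant-digit rendering the microsecond
# digit disappears.
as_float = 1_000_000_000.000001
print(format(as_float, ".15g"))   # -> 1000000000  (microseconds gone)

# An exact decimal type (the analogue of numeric) has no such limit.
as_numeric = Decimal("1000000000") + Decimal("0.000001")
print(as_numeric)                 # -> 1000000000.000001
```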
{
"msg_contents": "Here is a new patch series version.\n\nI have created a new internal function for converting integers to \nnumeric, to make the implementation a bit more elegant and compact.\n\nI have also created a new date_part(..., date) in C, and added more test \ncoverage for that.\n\nOther than some of the semantic issues mentioned in the previous \nmessage, this version looks pretty good to me in principle.\n\nI have done some performance tests to assess the impact of changing from \nfloat to numeric. I did tests like this:\n\ncreate table t1 (a int, b timestamp with time zone);\ninsert into t1 select generate_series(1, 10000000), current_timestamp + \nrandom() * interval '1000 days';\n\nselect extract(dow from b) from t1 \\g /dev/null\nselect extract(epoch from b) from t1 \\g /dev/null\n\nThere appears to be about a 20% increase in run time for these tests. \nThese are obviously extreme tests, so I think that would be okay. More \ntests and testing ideas are welcome.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 12 Aug 2020 09:04:07 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Here is a new patch series version.\n> I have created a new internal function for converting integers to \n> numeric, to make the implementation a bit more elegant and compact.\n\nI reviewed the 0002 patch, finding one bug (in int8_sum) and a few\nmore calls of int8_numeric that could be converted. I think the\nattached updated version is committable, and I'd recommend going\nahead with that regardless of the rest of this. I hadn't realized\nhow many random calls of int8_numeric and int4_numeric we'd grown,\nbut there are a lot, so this is nice cleanup.\n\nI continue to think that we can't commit 0003 in this form, because\nof the breakage that will ensue in stored views. As I said upthread,\nwe should leave the existing SQL-exposed functions alone, invent\nnew ones that return numeric, and alter the parser to translate\nEXTRACT constructs to the new functions. This approach would also\nprovide an \"out\" for anyone who does complain about the performance\ncost --- they can just continue to use the old functions.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 06 Sep 2020 19:46:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "On Mon, Sep 7, 2020 at 1:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > Here is a new patch series version.\n> > I have created a new internal function for converting integers to\n> > numeric, to make the implementation a bit more elegant and compact.\n>\n> I reviewed the 0002 patch, finding one bug (in int8_sum) and a few\n> more calls of int8_numeric that could be converted. I think the\n> attached updated version is committable, and I'd recommend going\n> ahead with that regardless of the rest of this. I hadn't realized\n> how many random calls of int8_numeric and int4_numeric we'd grown,\n> but there are a lot, so this is nice cleanup.\n>\n\nThis patch is a clean win.\n\n+1\n\n\n> I continue to think that we can't commit 0003 in this form, because\n> of the breakage that will ensue in stored views. As I said upthread,\n> we should leave the existing SQL-exposed functions alone, invent\n> new ones that return numeric, and alter the parser to translate\n> EXTRACT constructs to the new functions. This approach would also\n> provide an \"out\" for anyone who does complain about the performance\n> cost --- they can just continue to use the old functions.\n>\n\n+1\n\nRegards\n\nPavel\n\n\n> regards, tom lane\n>\n>\n",
"msg_date": "Mon, 7 Sep 2020 05:46:57 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "On 2020-09-07 01:46, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> Here is a new patch series version.\n>> I have created a new internal function for converting integers to\n>> numeric, to make the implementation a bit more elegant and compact.\n> \n> I reviewed the 0002 patch, finding one bug (in int8_sum)\n\nOuch, no test coverage. Should we perhaps remove this function, since \nit's obsolete and unused?\n\n> and a few\n> more calls of int8_numeric that could be converted. I think the\n> attached updated version is committable, and I'd recommend going\n> ahead with that regardless of the rest of this. I hadn't realized\n> how many random calls of int8_numeric and int4_numeric we'd grown,\n> but there are a lot, so this is nice cleanup.\n\nYes, please go ahead with it.\n\n> I continue to think that we can't commit 0003 in this form, because\n> of the breakage that will ensue in stored views. As I said upthread,\n> we should leave the existing SQL-exposed functions alone, invent\n> new ones that return numeric, and alter the parser to translate\n> EXTRACT constructs to the new functions. This approach would also\n> provide an \"out\" for anyone who does complain about the performance\n> cost --- they can just continue to use the old functions.\n\nOkay, I will continue looking into this.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 9 Sep 2020 10:08:06 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-09-07 01:46, Tom Lane wrote:\n>> I reviewed the 0002 patch, finding one bug (in int8_sum)\n\n> Ouch, no test coverage. Should we perhaps remove this function, since \n> it's obsolete and unused?\n\nI don't feel a need to.\n\n>> and a few\n>> more calls of int8_numeric that could be converted. I think the\n>> attached updated version is committable, and I'd recommend going\n>> ahead with that regardless of the rest of this. I hadn't realized\n>> how many random calls of int8_numeric and int4_numeric we'd grown,\n>> but there are a lot, so this is nice cleanup.\n\n> Yes, please go ahead with it.\n\nIt's your patch, I figured you'd want to commit it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 09 Sep 2020 09:38:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "On 2020-09-09 15:38, Tom Lane wrote:\n>>> and a few\n>>> more calls of int8_numeric that could be converted. I think the\n>>> attached updated version is committable, and I'd recommend going\n>>> ahead with that regardless of the rest of this. I hadn't realized\n>>> how many random calls of int8_numeric and int4_numeric we'd grown,\n>>> but there are a lot, so this is nice cleanup.\n> \n>> Yes, please go ahead with it.\n> \n> It's your patch, I figured you'd want to commit it.\n\nok done\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 9 Sep 2020 20:47:36 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "On Wed, Sep 09, 2020 at 08:47:36PM +0200, Peter Eisentraut wrote:\n> ok done\n\nAs far as I can see, patches 0001 and 0002 have been already applied,\nbut not 0003. Could you send a rebase to allow the CF bot to run, at\nleast?\n--\nMichael",
"msg_date": "Thu, 17 Sep 2020 14:53:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost\n when extracting epoch"
},
{
"msg_contents": "On Thu, Sep 17, 2020 at 02:53:27PM +0900, Michael Paquier wrote:\n> As far as I can see, patches 0001 and 0002 have been already applied,\n> but not 0003. Could you send a rebase to allow the CF bot to run, at\n> least?\n\nThis was two weeks ago. Looking at 0003, the thing is not really\ncomplicated, but as this thread has stalled I have marked the entry as\nRwF.\n--\nMichael",
"msg_date": "Wed, 30 Sep 2020 16:15:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost\n when extracting epoch"
},
{
"msg_contents": "I wrote:\n> However: suppose that we continue to translate these things into FuncExpr\n> nodes, the same as always, but we add a new CoercionForm variant, say\n> COERCE_SQL_SYNTAX. 99% of the system ignores FuncExpr.funcformat,\n> and would continue to do so, but ruleutils.c would take it to mean\n> that (1) the call should be reverse-listed as some special SQL syntax\n> and (2) the funcid is one of a small set of built-in functions for\n> which ruleutils.c knows what to emit. (If it doesn't recognize the\n> funcid, it could either throw an error, or fall back to normal display\n> of the node.) For cases such as EXTRACT, this would also represent\n> a promise that specific arguments are Const nodes from which the\n> desired keyword can be extracted.\n\nAttached is a draft patch that does this. I'm fairly pleased with it,\nbut there are some loose ends as described below. As the patch stands,\nit reverse-lists all our special-format function call syntaxes\n*except* EXTRACT. I left that out since I think we want to apply the\nreverse-listing change when we add the numeric-output extraction\nfunctions, as I said upthread.\n\nThe main thing that's incomplete here is that the switch on function\nOID fails to cover some cases that ought to be covered, as a result\nof limitations of Gen_fmgrtab.pl:\n\n* Some C functions such as text_substr have multiple pg_proc entries,\nand Gen_fmgrtab.pl chooses the wrong one for our purpose. 
We could\neither invent new Gen_fmgrtab.pl behavior to allow having macros for\nall the pg_proc entries, or we could add duplicate C functions so that\nthe pg_proc entries can point to different C symbols.\n\n* Some of the functions we need to reference aren't C functions at\nall, but SQL functions, for instance OID 1305 is defined as\n\tselect ($1, ($1 + $2)) overlaps ($3, ($3 + $4))\nI think our best bet here is to replace these SQL definitions with\nC equivalents, because really this implementation is pretty sucky.\nEven if we manage to inline the SQL definition, that's expensive\nto do; and evaluating some of the arguments twice is not nice either.\n\n> This is kind of an abuse of \"CoercionForm\", since that typedef name\n> implies that it only talks about how to handle cast cases, but\n> semantically it's always been a how-to-display-function-calls thing.\n> We could either hold our noses about that or rename the typedef.\n\nI did nothing about that here, since it'd bloat the patch without\nmaking anything but cosmetic changes. I'm tempted to propose though\nthat we rename \"CoercionForm\" to \"DisplayForm\" and rename its\nCOERCE_XXX values to DISPLAY_XXX, to make this less confusing.\n\nAnother bit of follow-up work we could contemplate is to get rid of\nthe SQLValueFunction node type, since there's nothing it does that\nwe couldn't do with regular FuncExpr nodes and COERCE_SQL_SYNTAX.\nBut that's just cleanup, and I don't think it would save a very\nlarge amount of code.\n\nThoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 31 Oct 2020 19:57:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
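The dispatch Tom describes, reverse-listing a FuncExpr tagged COERCE_SQL_SYNTAX through a switch on built-in function OIDs with a fallback to normal display, can be sketched in miniature (illustrative Python, not the C code; the OID values here are made up, not real pg_proc entries):

```python
# Toy deparser: a FuncExpr whose funcformat is COERCE_SQL_SYNTAX is
# reverse-listed via a per-OID emitter; an unrecognized OID falls back
# to the ordinary func(arg, ...) display, as the proposal allows.

def deparse_overlay(args):
    return f"OVERLAY({args[0]} PLACING {args[1]} FROM {args[2]})"

def deparse_substring(args):
    return f"SUBSTRING({args[0]} FROM {args[1]} FOR {args[2]})"

SQL_SYNTAX_EMITTERS = {   # funcid -> emitter; OIDs here are placeholders
    3001: deparse_overlay,
    3002: deparse_substring,
}

def deparse_funcexpr(funcid, funcname, args, funcformat):
    if funcformat == "COERCE_SQL_SYNTAX":
        emitter = SQL_SYNTAX_EMITTERS.get(funcid)
        if emitter is not None:
            return emitter(args)
    return f"{funcname}({', '.join(args)})"   # normal display

print(deparse_funcexpr(3002, "substring", ["f1", "2", "3"], "COERCE_SQL_SYNTAX"))
# -> SUBSTRING(f1 FROM 2 FOR 3)
print(deparse_funcexpr(9999, "myfunc", ["x"], "COERCE_SQL_SYNTAX"))
# -> myfunc(x)
```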
{
"msg_contents": "I wrote:\n> Attached is a draft patch that does this. I'm fairly pleased with it,\n> but there are some loose ends as described below. As the patch stands,\n> it reverse-lists all our special-format function call syntaxes\n> *except* EXTRACT. I left that out since I think we want to apply the\n> reverse-listing change when we add the numeric-output extraction\n> functions, as I said upthread.\n\n> The main thing that's incomplete here is that the switch on function\n> OID fails to cover some cases that ought to be covered, as a result\n> of limitations of Gen_fmgrtab.pl:\n\nNow that 8e1f37c07 fixed that, here's a complete version, with better\ntest coverage. (I still think we might want to rewrite those SQL\nfunctions as C, but that can be an independent project now.)\n\nRemaining open issues:\n\n* I notice that this will sometimes transform non-SQL-spec syntax\ninto SQL-spec, for example\n\n# explain verbose select substring(now()::text, 'foo');\n QUERY PLAN \n-----------------------------------------------------\n Result (cost=0.00..0.02 rows=1 width=32)\n Output: SUBSTRING((now())::text FROM 'foo'::text)\n(2 rows)\n\nI'm not sure that that satisfies the POLA. This particular case is\nespecially not great, because this is really textregexsubstr() which\nis *not* SQL compatible, so the display is more than a bit misleading.\nThe reason this happens is that we've included expr_list as a variant of\nsubstr_list, so that the func_expr_common_subexpr production has no idea\nwhether the argument list was really special syntax or not. What I'm\ninclined to do, but have not done yet, is to split that apart into\nseparate variants so that when the SQL-spec decoration is not used we\njust generate a perfectly vanilla FuncCall. In fact, I'd sort of argue\nthat we should not force the function to be sought in pg_catalog in such\na case either. 
The comments in substr_list claim that we're trying to\nallow extension functions named substring(), but using SystemFuncName is\n100% hostile to that.\n\n* Still waiting for comments on whether to rename CoercionForm.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 02 Nov 2020 17:50:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "I wrote:\n> * I notice that this will sometimes transform non-SQL-spec syntax\n> into SQL-spec, for example ...\n> I'm not sure that that satisfies the POLA. This particular case is\n> especially not great, because this is really textregexsubstr() which\n> is *not* SQL compatible, so the display is more than a bit misleading.\n\nActually, the problem there is that I made ruleutils.c willing to\nreverse-list textregexsubstr() in SQL syntax, which it really shouldn't\nsince there is no such function per SQL. So deleting that \"case\" value\nis enough to fix most of the problem. Still:\n\n> ... In fact, I'd sort of argue\n> that we should not force the function to be sought in pg_catalog in such\n> a case either. The comments in substr_list claim that we're trying to\n> allow extension functions named substring(), but using SystemFuncName is\n> 100% hostile to that.\n\n... this seems like a reasonable argument. However, in the attached\nI only did that for SUBSTRING and OVERLAY. I had thought of doing\nit for POSITION and TRIM, but both of those are weird enough that\nallowing a \"normal function call\" seems error-prone. For example,\nthe fact that TRIM(expr_list) works out as a call to btrim() is a mess,\nbut I don't think we can change it. (But of course you can still call a\nuser-defined trim() function if you double-quote the function name.)\n\nI did get rid of the empty variant for position_list, which AFAICS\nhas no value except adding confusion: there are no zero-argument\nfunctions named \"position\" in pg_catalog.\n\nI feel like this is committable at this point --- any objections?\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 03 Nov 2020 19:22:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "Here is a new patch for this. This now follows the implementation that \nTom has suggested: Leave date_part() alone, add a new set of extract() \nfunctions, and map the SQL EXTRACT construct to those. I have basically \njust copied over the implementations from my previous patch and placed \nthem next to the existing date_part() implementations. So all the \nbehavior is still the same as in the previous patches.\n\nOne thing I still need to look into is how to not lose all the test \ncoverage for date_part(). But that should be fairly mechanical, so I'm \nleaving it off in this version.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/",
"msg_date": "Tue, 15 Dec 2020 15:03:31 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
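The mapping step this message describes, with the parser resolving the EXTRACT construct to a numeric-returning function while plain date_part() is left alone, can be sketched as follows (illustrative Python; the extract_* names are placeholders, not the actual catalog entries):

```python
# The EXTRACT construct is rewritten to a type-specific function that
# returns numeric; direct date_part() calls are untouched, so stored
# views and callers who prefer float8 keep the old behavior.
EXTRACT_FUNCS = {            # source type -> numeric-returning function
    "date": "extract_date",
    "time": "extract_time",
    "timestamp": "extract_timestamp",
    "timestamptz": "extract_timestamptz",
    "interval": "extract_interval",
}

def transform_extract(field, source_type, source_expr):
    func = EXTRACT_FUNCS[source_type]   # KeyError = unsupported source type
    return f"{func}('{field}', {source_expr})"

print(transform_extract("epoch", "timestamptz", "b"))
# -> extract_timestamptz('epoch', b)
```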
{
"msg_contents": "On 12/15/20 9:03 AM, Peter Eisentraut wrote:\n> Here is a new patch for this. This now follows the implementation that \n> Tom has suggested: Leave date_part() alone, add a new set of extract() \n> functions, and map the SQL EXTRACT construct to those. I have basically \n> just copied over the implementations from my previous patch and placed \n> them next to the existing date_part() implementations. So all the \n> behavior is still the same as in the previous patches.\n> \n> One thing I still need to look into is how to not lose all the test \n> coverage for date_part(). But that should be fairly mechanical, so I'm \n> leaving it off in this version.\n\nTom, what do you think of the updated patch?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Mon, 15 Mar 2021 08:18:29 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "David Steele <david@pgmasters.net> writes:\n> On 12/15/20 9:03 AM, Peter Eisentraut wrote:\n>> Here is a new patch for this. This now follows the implementation that \n>> Tom has suggested: Leave date_part() alone, add a new set of extract() \n>> functions, and map the SQL EXTRACT construct to those. I have basically \n>> just copied over the implementations from my previous patch and placed \n>> them next to the existing date_part() implementations. So all the \n>> behavior is still the same as in the previous patches.\n>> \n>> One thing I still need to look into is how to not lose all the test \n>> coverage for date_part(). But that should be fairly mechanical, so I'm \n>> leaving it off in this version.\n\n> Tom, what do you think of the updated patch?\n\nOh, I didn't think I was on the hook to review this ;-)\n\nAnyway, taking a quick look at the v4 patch, the only complaint\nI have is that it seems a bit bulky and brute-force to duplicate\nso much code. Is it feasible to share most of the implementation\nbetween old and new functions, returning (say) an int64 that can\nthen be converted to either numeric or float8 by a wrapper? That\nwould also reduce the pressure to duplicate all the test cases.\n\n(I don't intend this complaint as a deal-breaker; Peter may well\nhave considered this alternative already and rejected it for good\nreasons.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Mar 2021 13:35:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "On 15.03.21 18:35, Tom Lane wrote:\n> Anyway, taking a quick look at the v4 patch, the only complaint\n> I have is that it seems a bit bulky and brute-force to duplicate\n> so much code. Is it feasible to share most of the implementation\n> between old and new functions, returning (say) an int64 that can\n> then be converted to either numeric or float8 by a wrapper? That\n> would also reduce the pressure to duplicate all the test cases.\n\nYeah, it's not straightforward to do this, because you'd also need to \ncarry around scale and infinity information, so you might end up \ncreating a mini-numeric implementation just for this.\n\nAn easy way to reduce duplication would be to convert the existing \ndate_part() into a wrapper around the new extract(), with a cast. But \nthen you'd pay the performance penalty of the numeric version.\n\nWhich leads me to: After retesting this now, with a new machine, the \nperformance of the numeric implementation is brutal compared to the \nfloat implementation, for cases where we need numeric division, which is \nmilliseconds, seconds, and epoch. In the first two cases, I imagine we \ncould rewrite this a bit to avoid a lot of the numeric work, but for the \nepoch case (which is what started this thread), there isn't enough space \nin int64 to make this work. Perhaps int128 could be pressed into \nservice, optionally. I think it would also help if we cracked open the \nnumeric APIs a bit to avoid all the repeated unpacking and packing for \neach step.\n\nSo I think we need to do a bit more thinking and work here, meaning it \nwill have to be postponed.\n\n\nHere are the kinds of tests I ran:\n\n=> select date_part('epoch', localtime + generate_series(0, 10000000) * \ninterval '1 second') \\g /dev/null\nTime: 2537.482 ms (00:02.537)\n\n=> select extract(epoch from localtime + generate_series(0, 10000000) * \ninterval '1 second') \\g /dev/null\nTime: 6106.586 ms (00:06.107)\n\n\n",
"msg_date": "Thu, 18 Mar 2021 09:28:38 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "On 18.03.21 09:28, Peter Eisentraut wrote:\n> Which leads me to: After retesting this now, with a new machine, the \n> performance of the numeric implementation is brutal compared to the \n> float implementation, for cases where we need numeric division, which is \n> milliseconds, seconds, and epoch. In the first two cases, I imagine we \n> could rewrite this a bit to avoid a lot of the numeric work, but for the \n> epoch case (which is what started this thread), there isn't enough space \n> in int64 to make this work. Perhaps int128 could be pressed into \n> service, optionally. I think it would also help if we cracked open the \n> numeric APIs a bit to avoid all the repeated unpacking and packing for \n> each step.\n> \n> So I think we need to do a bit more thinking and work here, meaning it \n> will have to be postponed.\n\nWell, I had an idea that I put to work. In most of these cases where we \nneed division, we divide an integer by a power of 10. That can be done \nwith numeric very quickly by just shifting the weight and scale around. \nSo I wrote a function that does that specifically (look for \nint64_div_fast_to_numeric()). With that, the slow cases I mentioned now \nhave the same performance as the other cases that didn't have any \nnumeric division. You just get the overhead for constructing and \npassing around a numeric instead of a double, which can't be avoided.\n\nSo here is an intermediate patch that does this. I haven't gotten rid \nof all numeric_div_opt_error() calls yet, but if this seems acceptable, \nI can work on the remaining ones.",
"msg_date": "Fri, 19 Mar 2021 20:37:04 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
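The weight/scale-shifting trick can be modeled with ordinary integers (a sketch, not the C implementation; following the suggestion made later in the thread, it takes the base-10-log of the divisor). With base-10000 digits, dividing by 10^k only needs one small integer multiply to pad k up to a multiple of 4, after which the "division" is a pure positional shift:

```python
from decimal import Decimal

NBASE = 10000   # numeric stores base-10000 "digits"

def div_by_pow10(val, log10_divisor):
    """Divide an integer by 10**log10_divisor exactly, with no long
    division: pad val so the divisor becomes a power of NBASE, then the
    quotient is just val with its weight/dscale shifted."""
    q, r = divmod(log10_divisor, 4)     # divisor = NBASE**q * 10**r
    if r:
        val *= 10 ** (4 - r)            # one cheap multiply (overflow-checked in C)
        q += 1
    shifted = Decimal(val).scaleb(-4 * q)            # positional shift only
    return shifted.quantize(Decimal(1).scaleb(-log10_divisor))

print(div_by_pow10(1_000_000_000_000_001, 6))   # microseconds -> seconds
# -> 1000000000.000001
```

This is why the epoch case keeps its microseconds: the result carries exactly log10_divisor decimal places and no rounding ever occurs.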
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Well, I had an idea that I put to work. In most of these cases where we \n> need division, we divide an integer by a power of 10. That can be done \n> with numeric very quickly by just shifting the weight and scale around. \n> So I wrote a function that does that specifically (look for \n> int64_div_fast_to_numeric()). With that, the slow cases I mentioned now \n> have the same performance as the other cases that didn't have any \n> numeric division. You just get the overhead for constructing and \n> passing around a numeric instead of a double, which can't be avoided.\n\nYeah, I was wondering if we could do something like that, but I hadn't\ngot as far as figuring a way to deal with divisors not a multiple of\nNBASE.\n\nLooking at the proposed code, I wonder if it wouldn't be better to\ndefine the function as taking the base-10-log of the divisor, so that\nyou'd have the number of digits to shift (and the dscale) immediately\ninstead of needing repeated integer divisions to get that. Also, the\nrisk of intermediate overflow here seems annoying:\n\n+\t\tif (unlikely(pg_mul_s64_overflow(val1, NBASE/x, &val1)))\n+\t\t\telog(ERROR, \"overflow\");\n\nMaybe that's unreachable for the ranges of inputs the current patch could\ncreate, but it seems like it makes the function distinctly less\ngeneral-purpose than one would think from its comment. Maybe, if that\noverflows, we could handle the failure by making that adjustment after\nwe've converted to numeric?\n\n> So here is an intermediate patch that does this. I haven't gotten rid \n> of all numeric_div_opt_error() calls yet, but if this seems acceptable, \n> I can work on the remaining ones.\n\nI guess the immediate question is how much of a performance gap there\nis now between the float and numeric implementations.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 19 Mar 2021 16:06:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "On 19.03.21 21:06, Tom Lane wrote:\n> I guess the immediate question is how much of a performance gap there\n> is now between the float and numeric implementations.\n\nAttached are my test script and the full output.\n\nTo summarize, for cases that don't do any interesting computation and \nwhere the overhead is only the data type passing, the difference is like \nthis:\n\n-- old\nselect date_part('microseconds', current_timestamp + generate_series(0, \n10000000) * interval '1 second') \\g /dev/null\nTime: 2760.966 ms (00:02.761)\n\n-- new\nselect extract(microseconds from current_timestamp + generate_series(0, \n10000000) * interval '1 second') \\g /dev/null\nTime: 3178.477 ms (00:03.178)",
"msg_date": "Mon, 22 Mar 2021 21:58:20 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 19.03.21 21:06, Tom Lane wrote:\n>> I guess the immediate question is how much of a performance gap there\n>> is now between the float and numeric implementations.\n\n> Attached are my test script and the full output.\n\nOK ... I prefer to do this sort of timing in a way that's not so\ndependent on client I/O speeds, along the lines of\n\nselect count(date_part('day', current_date + g * interval '1 day')) from generate_series(0, :N) g;\n\nI applied the v5 patch and ran your test suite that way, producing\nthe attached results. It looks pretty promising for me, too.\nMost of the cases show about 10%-15% degradation:\n\n# select extract, date_part, extract/date_part as ratio, unit from (select sum(msec) filter (where fn = 'extract') as extract, sum(msec) filter (where fn = 'date_part') as date_part, unit from timings group by unit) ss order by ratio;\n extract | date_part | ratio | unit \n-----------+-----------+------------------------+-----------------\n 22690.100 | 20705.402 | 1.09585411575201486066 | decade\n 22810.005 | 20754.296 | 1.09904980636298142804 | century\n 11238.122 | 10190.385 | 1.10281623314526389337 | timezone_minute\n 20201.992 | 18303.982 | 1.1036938301184955 | doy\n 20121.073 | 18206.290 | 1.1051715094069138 | dow\n 23209.090 | 20915.715 | 1.10964841507928368693 | millennium\n 18839.455 | 16943.063 | 1.11192734159106886399 | week\n 20130.843 | 18010.011 | 1.1177585066438882 | isoyear\n 19755.296 | 17668.497 | 1.11810846163089027890 | isodow\n 22500.373 | 20112.264 | 1.11873894455641592612 | day\n 22631.485 | 20200.266 | 1.12035579135443067928 | month\n 22883.344 | 20407.733 | 1.12130749652594925659 | quarter\n 22628.524 | 20172.361 | 1.12175882634660365239 | year\n 26503.545 | 23493.288 | 1.12813263941598979249 | minute\n 26381.817 | 23329.924 | 1.13081452815705700542 | hour\n 27236.886 | 24070.860 | 1.13152940941869131390 | microseconds\n 11563.820 | 9948.148 | 
1.1624093248311143 | timezone_hour\n 27728.212 | 23567.973 | 1.17652086583771968849 | second\n 28348.328 | 23984.219 | 1.18195751965073367617 | milliseconds\n 49902.129 | 30798.034 | 1.6203024193037776 | epoch\n 31544.035 | 18250.745 | 1.7283697186060076 | julian\n(21 rows)\n\nThe outliers are epoch and julian, which unsurprisingly are the\nones you didn't fix yet.\n\nI think a ten-percent-ish slowdown is acceptable for this purpose,\nso I think if you can address the points already raised then we're\npretty much good to go with this.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 23 Mar 2021 16:52:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "On 19.03.21 21:06, Tom Lane wrote:\n> Yeah, I was wondering if we could do something like that, but I hadn't\n> got as far as figuring a way to deal with divisors not a multiple of\n> NBASE.\n> \n> Looking at the proposed code, I wonder if it wouldn't be better to\n> define the function as taking the base-10-log of the divisor, so that\n> you'd have the number of digits to shift (and the dscale) immediately\n> instead of needing repeated integer divisions to get that.\n\ndone that way, much simpler now\n\n> Also, the\n> risk of intermediate overflow here seems annoying:\n> \n> +\t\tif (unlikely(pg_mul_s64_overflow(val1, NBASE/x, &val1)))\n> +\t\t\telog(ERROR, \"overflow\");\n> \n> Maybe that's unreachable for the ranges of inputs the current patch could\n> create, but it seems like it makes the function distinctly less\n> general-purpose than one would think from its comment. Maybe, if that\n> overflows, we could handle the failure by making that adjustment after\n> we've converted to numeric?\n\nalso done\n\nI also figured out a way to combine the float8 and numeric \nimplementations so that there is not so much duplication. Added tests \nto cover all the edge and overflow cases.\n\nI think this is solid now.\n\nThe extract(julian from timestamp) is still a bit in the slow mode, but \nas I previously stated, it's not documented and gives the wrong result, \nso it's not clear whether it should be fixed and what it should do. I \nthink I'll register that part as an open item in any case, to see what \nwe should do about that.",
"msg_date": "Thu, 1 Apr 2021 20:49:45 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "On 01.04.21 20:49, Peter Eisentraut wrote:\n> also done\n> \n> I also figured out a way to combine the float8 and numeric \n> implementations so that there is not so much duplication. Added tests \n> to cover all the edge and overflow cases.\n> \n> I think this is solid now.\n> \n> The extract(julian from timestamp) is still a bit in the slow mode, but \n> as I previously stated, it's not documented and gives the wrong result, \n> so it's not clear whether it should be fixed and what it should do. I \n> think I'll register that part as an open item in any case, to see what \n> we should do about that.\n\ncommitted and done\n\n\n\n",
"msg_date": "Tue, 6 Apr 2021 07:48:21 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> The extract(julian from timestamp) is still a bit in the slow mode, but \n> as I previously stated, it's not documented and gives the wrong result, \n> so it's not clear whether it should be fixed and what it should do. I \n> think I'll register that part as an open item in any case, to see what \n> we should do about that.\n\nI looked into this issue. It's not quite true that the behavior is\nentirely undocumented: Appendix B (datetime.sgml) says\n\n In the Julian Date system, each day has a sequential number, starting\n from JD 0 (which is sometimes called <emphasis>the</emphasis> Julian Date).\n JD 0 corresponds to 1 January 4713 BC in the Julian calendar, or\n 24 November 4714 BC in the Gregorian calendar. Julian Date counting\n is most often used by astronomers for labeling their nightly observations,\n and therefore a date runs from noon UTC to the next noon UTC, rather than\n from midnight to midnight: JD 0 designates the 24 hours from noon UTC on\n 24 November 4714 BC to noon UTC on 25 November 4714 BC.\n </para>\n\n <para>\n Although <productname>PostgreSQL</productname> supports Julian Date notation for\n input and output of dates (and also uses Julian dates for some internal\n datetime calculations), it does not observe the nicety of having dates\n run from noon to noon. <productname>PostgreSQL</productname> treats a Julian Date\n as running from midnight to midnight.\n </para>\n\nThat last bit requires clarification: we treat a Julian date as running\nfrom *local* midnight to local midnight (ie in the active timezone, not\nUTC midnight). 
So far as I can see, the behavior of extract(julian) is\nconsistent with that definition:\n\nregression=# show timezone;\n TimeZone \n------------------\n America/New_York\n(1 row)\n\nregression=# select date_part('julian', '2021-04-19 00:00:01-04'::timestamptz);\n date_part \n-------------------\n 2459324.000011574\n(1 row)\n\nregression=# select date_part('julian', '2021-04-19 23:59:00-04'::timestamptz);\n date_part \n--------------------\n 2459324.9993055556\n(1 row)\n\nregression=# select date_part('julian', '2021-04-19'::date);\n date_part \n-----------\n 2459324\n(1 row)\n\nI don't see that to_char's J mode differs from this, either.\n\nSo I don't think there's any code change required (unless you are still\nworried about speed). What we do need is documentation fixes:\n\n* clarify the above bit about local vs UTC midnight\n\n* document the existence of the julian field for date_part/extract\n\n* fix this bit in the to_char docs to agree with reality,\nie s/UTC/local time/:\n\n <row>\n <entry><literal>J</literal></entry>\n <entry>Julian Day (integer days since November 24, 4714 BC at midnight UTC)</entry>\n </row>\n\nPerhaps it'd be worth documenting that you can get the standard\nastronomical definition of Julian date by transposing to time zone UTC-12\nbefore converting. But I think trying to change PG's behavior at this\npoint would be a bad idea.\n\n(We could also consider back-patching these doc fixes.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Apr 2021 11:57:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "I wrote:\n> So I don't think there's any code change required (unless you are still\n> worried about speed). What we do need is documentation fixes:\n> * clarify the above bit about local vs UTC midnight\n> * document the existence of the julian field for date_part/extract\n> * fix this bit in the to_char docs to agree with reality,\n> ie s/UTC/local time/:\n> <entry>Julian Day (integer days since November 24, 4714 BC at midnight UTC)</entry>\n> Perhaps it'd be worth documenting that you can get the standard\n> astronomical definition of Julian date by transposing to time zone UTC-12\n> before converting. But I think trying to change PG's behavior at this\n> point would be a bad idea.\n\nHere's a concrete documentation proposal covering this.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 22 Apr 2021 16:26:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "I wrote:\n>> Perhaps it'd be worth documenting that you can get the standard\n>> astronomical definition of Julian date by transposing to time zone UTC-12\n>> before converting.\n\nBTW ... I'd first thought that the way to do this was to rotate to\ntime zone UTC+12. I convinced myself on two separate days that UTC-12\nwas correct instead, but now I'm thinking I was right the first time.\nIn particular, the results I'm getting with UTC-12 don't square with\nthe example on Wikipedia [1], which says \"the Julian Date for\n00:30:00.0 UT January 1, 2013, is 2 456 293.520 833\":\n\nregression=# select extract(julian from '2013-01-01 00:30+00'::timestamptz at time zone 'utc-12');\n extract \n------------------------------\n 2456294.52083333333333333333\n(1 row)\n\nBut using UTC+12 does match:\n\nregression=# select extract(julian from '2013-01-01 00:30+00'::timestamptz at time zone 'utc+12');\n extract \n------------------------------\n 2456293.52083333333333333333\n(1 row)\n\nOf course Wikipedia has been known to contain errors, but now\nI'm inclined to think I blew this. Anyone want to check my work?\n\n\t\t\tregards, tom lane\n\n[1] https://en.wikipedia.org/wiki/Julian_day\n\n\n",
"msg_date": "Tue, 27 Apr 2021 11:56:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "On Wed, Apr 28, 2021 at 3:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Of course Wikipedia has been known to contain errors, but now\n> I'm inclined to think I blew this. Anyone want to check my work?\n\nI tried a couple of examples not from Wikipedia. First, from the\ndefinition of Julian days as used by astronomers[1], counting from\nnoon on 4713-01-01 BC Julian AKA 4714-11-24 BC Gregorian, days 0 and 1\nlook right with 'utc+12':\n\npostgres=# select extract(julian from '4714-11-24 11:00:00+00\nBC'::timestamptz at time zone 'utc+12');\nERROR: timestamp out of range\npostgres=# select extract(julian from '4714-11-24 12:00:00+00\nBC'::timestamptz at time zone 'utc+12');\n extract\n--------------------------------\n 0.0000000000000000000000000000\n(1 row)\n\npostgres=# select extract(julian from '4714-11-25 11:00:00+00\nBC'::timestamptz at time zone 'utc+12');\n extract\n------------------------\n 0.95833333333333333333\n(1 row)\n\npostgres=# select extract(julian from '4714-11-25 12:00:00+00\nBC'::timestamptz at time zone 'utc+12');\n extract\n--------------------------------\n 1.0000000000000000000000000000\n(1 row)\n\nNext I found a worked example in an aerospace textbook[2] and it agrees, too:\n\npostgres=# select extract(julian from '2004-05-12\n14:45:30+00'::timestamptz at time zone 'utc+12');\n extract\n------------------------------\n 2453138.11493055555555555556\n(1 row)\n\n[1] http://curious.astro.cornell.edu/people-and-astronomy/125-observational-astronomy/timekeeping/calendars/763-how-was-the-starting-point-for-the-julian-date-system-chosen-advanced\n[2] https://www.sciencedirect.com/topics/engineering/julian-day-number\n\n\n",
"msg_date": "Wed, 28 Apr 2021 17:43:27 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Wed, Apr 28, 2021 at 3:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Of course Wikipedia has been known to contain errors, but now\n>> I'm inclined to think I blew this. Anyone want to check my work?\n\n> I tried a couple of examples not from Wikipedia. ...\n\nThanks for checking! I'll go adjust the documentation.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Apr 2021 09:43:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "On Tue, Nov 03, 2020 at 07:22:14PM -0500, Tom Lane wrote:\n> I feel like this is committable at this point --- any objections?\n\n(This became commit 40c24bf, \"Improve our ability to regurgitate SQL-syntax\nfunction calls.\")\n\n> --- a/src/backend/nodes/equalfuncs.c\n> +++ b/src/backend/nodes/equalfuncs.c\n> @@ -2369,11 +2369,12 @@ _equalFuncCall(const FuncCall *a, const FuncCall *b)\n\n> +\tCOMPARE_SCALAR_FIELD(funcformat);\n\nequalfuncs.c has been using COMPARE_COERCIONFORM_FIELD() to ignore differences\nin fields of this type. Does this spot have cause to depart from the pattern?\n\n\n",
"msg_date": "Sat, 5 Jun 2021 23:33:31 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost\n when extracting epoch"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> equalfuncs.c has been using COMPARE_COERCIONFORM_FIELD() to ignore differences\n> in fields of this type. Does this spot have cause to depart from the pattern?\n\nOversight, I think. Will fix.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 06 Jun 2021 10:37:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "I wrote:\n> Noah Misch <noah@leadboat.com> writes:\n>> equalfuncs.c has been using COMPARE_COERCIONFORM_FIELD() to ignore differences\n>> in fields of this type. Does this spot have cause to depart from the pattern?\n\n> Oversight, I think. Will fix.\n\nAfter looking closer, I see that there are a couple of very very minor\nways in which parse analysis changes behavior based on the value of\nFuncCall.funcformat:\n\n* transformRangeFunction won't apply the appropriate transformation to\na multiple-argument unnest() unless the format is COERCE_EXPLICIT_CALL.\n(This is likely a no-op, though, as no grammar production that emits\nCOERCE_SQL_SYNTAX could apply to the function name \"unnest\".)\n\n* ParseFuncOrColumn will not believe that a FuncCall could_be_projection\nunless the format is COERCE_EXPLICIT_CALL. This is next door to a no-op,\nsince other restrictions such as nargs == 1 would usually suffice to\nreject COERCE_SQL_SYNTAX calls, but maybe there are corner cases where\nit'd matter.\n\nSo if you wanted to be picky you could claim that within FuncCall,\nfuncformat is semantically significant and thus that equalfuncs.c is\ncoded correctly. Nonetheless I'm inclined to think that it'd be better\nto use COMPARE_COERCIONFORM_FIELD here. I'm quite sure I didn't make\nthe above analysis when I wrote the code; using COMPARE_SCALAR_FIELD\nwas just reflex.\n\nWe could make use of COMPARE_COERCIONFORM_FIELD 100% correct by removing\nthese two tests of the funcformat value, but on the whole I doubt that\nwould be better.\n\nBTW, I'm not sure any of this matters anyway; do we ever use equal()\non raw parse trees, except for debug purposes?\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 06 Jun 2021 12:38:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "I wrote:\n> We could make use of COMPARE_COERCIONFORM_FIELD 100% correct by removing\n> these two tests of the funcformat value, but on the whole I doubt that\n> would be better.\n\nOn still closer inspection, that seems like it'd be fine. All of\nthe gram.y productions that emit COERCE_SQL_SYNTAX also produce\nschema-qualified function names (via SystemFuncName); and it seems\nhard to see a use-case where we'd not do that. This makes the two\nchecks I cited 100% redundant, because the conditions they are in\nalso insist on an unqualified function name. So let's just take them\nout again, making it strictly OK to use COMPARE_COERCIONFORM_FIELD.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 06 Jun 2021 15:10:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost when\n extracting epoch"
},
{
"msg_contents": "On Sun, Jun 06, 2021 at 03:10:07PM -0400, Tom Lane wrote:\n> I wrote:\n> > We could make use of COMPARE_COERCIONFORM_FIELD 100% correct by removing\n> > these two tests of the funcformat value, but on the whole I doubt that\n> > would be better.\n> \n> On still closer inspection, that seems like it'd be fine. All of\n> the gram.y productions that emit COERCE_SQL_SYNTAX also produce\n> schema-qualified function names (via SystemFuncName); and it seems\n> hard to see a use-case where we'd not do that. This makes the two\n> checks I cited 100% redundant, because the conditions they are in\n> also insist on an unqualified function name. So let's just take them\n> out again, making it strictly OK to use COMPARE_COERCIONFORM_FIELD.\n\nI have little intuition on this exact topic, but I have no particular concerns\nabout the change you pushed.\n\n\n",
"msg_date": "Mon, 7 Jun 2021 22:31:55 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Since '2001-09-09 01:46:40'::timestamp microseconds are lost\n when extracting epoch"
}
]
[
{
"msg_contents": "This came up recently on IRC, not sure if the report there was passed on\nat all.\n\nProcessStartupPacket assumes that there will be only one negotiation\nrequest for an encrypted connection, but libpq is capable of issuing\ntwo: it will ask for GSS encryption first, if it looks like it will be\nable to do GSSAPI, and if the server refuses that it will ask (on the\nsame connection) for SSL.\n\nBut ProcessStartupPacket assumes that the packet after a failed\nnegotiation of either kind will be the actual startup packet, so the SSL\nconnection request is rejected with \"unsupported version 1234.5679\".\n\nI'm guessing this usually goes unnoticed because most people are\nprobably not set up to do GSSAPI, and those who are are probably ok with\nusing it for encryption. But if the client is set up for GSSAPI and the\nserver not, then trying to do an SSL connection will fail when it should\nsucceed, and PGGSSENCMODE=disable in the environment (or connect string)\nis necessary to get the connection to succeed.\n\nIt seems to me that this is a bug in ProcessStartupPacket, which should\naccept both GSS or SSL negotiation requests on a connection (in either\norder). Maybe secure_done should be two flags rather than one?\n\nI'm not really familiar with the GSSAPI stuff so probably someone who is\nshould take a look.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Sun, 01 Dec 2019 01:13:31 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "Protocol problem with GSSAPI encryption?"
},
{
"msg_contents": "On 2019-12-01 02:13, Andrew Gierth wrote:\n> But ProcessStartupPacket assumes that the packet after a failed\n> negotiation of either kind will be the actual startup packet, so the SSL\n> connection request is rejected with \"unsupported version 1234.5679\".\n> \n> I'm guessing this usually goes unnoticed because most people are\n> probably not set up to do GSSAPI, and those who are are probably ok with\n> using it for encryption. But if the client is set up for GSSAPI and the\n> server not, then trying to do an SSL connection will fail when it should\n> succeed, and PGGSSENCMODE=disable in the environment (or connect string)\n> is necessary to get the connection to succeed.\n> \n> It seems to me that this is a bug in ProcessStartupPacket, which should\n> accept both GSS or SSL negotiation requests on a connection (in either\n> order). Maybe secure_done should be two flags rather than one?\n\nI have also seen reports of that. I think your analysis is correct.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 2 Dec 2019 17:06:30 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Protocol problem with GSSAPI encryption?"
},
{
"msg_contents": ">>>>> \"Peter\" == Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n\n >> It seems to me that this is a bug in ProcessStartupPacket, which\n >> should accept both GSS or SSL negotiation requests on a connection\n >> (in either order). Maybe secure_done should be two flags rather than\n >> one?\n\n Peter> I have also seen reports of that. I think your analysis is\n Peter> correct.\n\nI figure something along these lines for the fix. Anyone in a position\nto test this?\n\n-- \nAndrew (irc:RhodiumToad)",
"msg_date": "Wed, 04 Dec 2019 00:51:23 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "Re: Protocol problem with GSSAPI encryption?"
},
{
"msg_contents": "Greetings,\n\n* Andrew Gierth (andrew@tao11.riddles.org.uk) wrote:\n> >>>>> \"Peter\" == Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> \n> >> It seems to me that this is a bug in ProcessStartupPacket, which\n> >> should accept both GSS or SSL negotiation requests on a connection\n> >> (in either order). Maybe secure_done should be two flags rather than\n> >> one?\n> \n> Peter> I have also seen reports of that. I think your analysis is\n> Peter> correct.\n> \n> I figure something along these lines for the fix. Anyone in a position\n> to test this?\n\nAt least at first blush, I tend to agree with your analysis and patch.\n\nI'll see about getting this actually set up and tested in the next week\nor so (and maybe there's some way to also manage to have a regression\ntest for it..).\n\nThanks!\n\nStephen",
"msg_date": "Wed, 4 Dec 2019 00:24:41 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Protocol problem with GSSAPI encryption?"
},
{
"msg_contents": "> On 4. Dec 2019, at 06:24, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> Greetings,\n> \n> * Andrew Gierth (andrew@tao11.riddles.org.uk) wrote:\n>>>>>>> \"Peter\" == Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> \n>>>> It seems to me that this is a bug in ProcessStartupPacket, which\n>>>> should accept both GSS or SSL negotiation requests on a connection\n>>>> (in either order). Maybe secure_done should be two flags rather than\n>>>> one?\n>> \n>> Peter> I have also seen reports of that. I think your analysis is\n>> Peter> correct.\n>> \n>> I figure something along these lines for the fix. Anyone in a position\n>> to test this?\n> \n> At least at first blush, I tend to agree with your analysis and patch.\n\nI agree with the patch, but this also needs to be fixed on the client side.\nOtherwise libpq won't be able to connect to older servers.\n\nI'm attaching to this message a proposed second patch that detects the error on the client side and reconnects.\n\nThis patch was first submitted as a separate thread here:\nhttps://www.postgresql.org/message-id/F27EEE9D-D04A-4B6B-B1F1-96EA4DD996D0@eggerapps.at\n\n\nJakob",
"msg_date": "Fri, 6 Dec 2019 15:35:38 +0100",
"msg_from": "Jakob Egger <jakob@eggerapps.at>",
"msg_from_op": false,
"msg_subject": "Re: Protocol problem with GSSAPI encryption?"
},
{
"msg_contents": "On Sun, Dec 1, 2019 at 01:13:31AM +0000, Andrew Gierth wrote:\n> This came up recently on IRC, not sure if the report there was passed on\n> at all.\n> \n> ProcessStartupPacket assumes that there will be only one negotiation\n> request for an encrypted connection, but libpq is capable of issuing\n> two: it will ask for GSS encryption first, if it looks like it will be\n> able to do GSSAPI, and if the server refuses that it will ask (on the\n> same connection) for SSL.\n\nAre you saying that there is an additional round-trip for starting all\nSSL connections because we now support GSSAPI, or this only happens if\nlibpq asks for GSSAPI?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 20 Dec 2019 12:37:48 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Protocol problem with GSSAPI encryption?"
},
{
"msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Sun, Dec 1, 2019 at 01:13:31AM +0000, Andrew Gierth wrote:\n> > This came up recently on IRC, not sure if the report there was passed on\n> > at all.\n> > \n> > ProcessStartupPacket assumes that there will be only one negotiation\n> > request for an encrypted connection, but libpq is capable of issuing\n> > two: it will ask for GSS encryption first, if it looks like it will be\n> > able to do GSSAPI, and if the server refuses that it will ask (on the\n> > same connection) for SSL.\n> \n> Are you saying that there is an additional round-trip for starting all\n> SSL connections because we now support GSSAPI, or this only happens if\n> libpq asks for GSSAPI?\n\nThe way that this is intended to work is that if, and only if, there is a\nvalid GSS credential cache (on the client side) will GSSAPI encryption\nbe attempted, and then if that fails because the server doesn't support\nGSSAPI encryption or it's not possible to acquire credentials for\nwhatever reason then we'll fall back to other methods.\n\nI have heard, however, that the Apple GSS libraries are both outright\nbroken (they lie about a valid credential cache existing- claiming one\ndoes even when that's clearly not the case, based on klist..), and\ndeprecated (so they aren't likely going to fix them either..). We're\ncurrently looking to see if there's a way to basically detect the Apple\nGSS libraries and refuse to build if we discover that's what we're\nbuilding against. I'm not sure what other choice we really have...\n\nIf you gdb psql, without a Kerberos credential cache, on a system that\nhas a working GSS library, you'll note that pg_GSS_have_cred_cache()\nreturns false, meaning we skip over the GSS startup code in\nPQconnectPoll() (and drop down to trying to do SSL next).\n\nThanks,\n\nStephen",
"msg_date": "Fri, 20 Dec 2019 13:07:58 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Protocol problem with GSSAPI encryption?"
},
{
"msg_contents": ">>>>> \"Bruce\" == Bruce Momjian <bruce@momjian.us> writes:\n\n >> This came up recently on IRC, not sure if the report there was\n >> passed on at all.\n >> \n >> ProcessStartupPacket assumes that there will be only one negotiation\n >> request for an encrypted connection, but libpq is capable of issuing\n >> two: it will ask for GSS encryption first, if it looks like it will\n >> be able to do GSSAPI, and if the server refuses that it will ask (on\n >> the same connection) for SSL.\n\n Bruce> Are you saying that there is an additional round-trip for\n Bruce> starting all SSL connections because we now support GSSAPI, or\n Bruce> this only happens if libpq asks for GSSAPI?\n\nThe problem only occurs if libpq thinks it might be able to do GSSAPI,\nbut the server does not. Without the patch I proposed or something like\nit, this case fails to connect at all; with it, there will be an extra\nround-trip. Explicitly disabling GSSAPI encryption in the connection\nstring or environment avoids the issue.\n\nThe exact condition for libpq seems to be a successful call to\ngss_acquire_cred, but I'm not familiar with GSS in general.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Fri, 20 Dec 2019 18:14:09 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "Re: Protocol problem with GSSAPI encryption?"
},
{
"msg_contents": "On Fri, Dec 20, 2019 at 06:14:09PM +0000, Andrew Gierth wrote:\n> >>>>> \"Bruce\" == Bruce Momjian <bruce@momjian.us> writes:\n> \n> >> This came up recently on IRC, not sure if the report there was\n> >> passed on at all.\n> >> \n> >> ProcessStartupPacket assumes that there will be only one negotiation\n> >> request for an encrypted connection, but libpq is capable of issuing\n> >> two: it will ask for GSS encryption first, if it looks like it will\n> >> be able to do GSSAPI, and if the server refuses that it will ask (on\n> >> the same connection) for SSL.\n> \n> Bruce> Are you saying that there is an additional round-trip for\n> Bruce> starting all SSL connections because we now support GSSAPI, or\n> Bruce> this only happens if libpq asks for GSSAPI?\n> \n> The problem only occurs if libpq thinks it might be able to do GSSAPI,\n> but the server does not. Without the patch I proposed or something like\n> it, this case fails to connect at all; with it, there will be an extra\n> round-trip. Explicitly disabling GSSAPI encryption in the connection\n> string or environment avoids the issue.\n> \n> The exact condition for libpq seems to be a successful call to\n> gss_acquire_cred, but I'm not familiar with GSS in general.\n\nThanks for the clarification from you and Stephen.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 20 Dec 2019 13:16:27 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Protocol problem with GSSAPI encryption?"
},
{
"msg_contents": ">>>>> \"Stephen\" == Stephen Frost <sfrost@snowman.net> writes:\n\n >> I figure something along these lines for the fix. Anyone in a\n >> position to test this?\n\n Stephen> At least at first blush, I tend to agree with your analysis\n Stephen> and patch.\n\n Stephen> I'll see about getting this actually set up and tested in the\n Stephen> next week or so (and maybe there's some way to also manage to\n Stephen> have a regression test for it..).\n\n*poke*\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Fri, 21 Feb 2020 00:35:03 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "Re: Protocol problem with GSSAPI encryption?"
},
{
"msg_contents": "On Fri, Feb 21, 2020 at 12:35:03AM +0000, Andrew Gierth wrote:\n> >>>>> \"Stephen\" == Stephen Frost <sfrost@snowman.net> writes:\n> \n> >> I figure something along these lines for the fix. Anyone in a\n> >> position to test this?\n> \n> Stephen> At least at first blush, I tend to agree with your analysis\n> Stephen> and patch.\n> \n> Stephen> I'll see about getting this actually set up and tested in the\n> Stephen> next week or so (and maybe there's some way to also manage to\n> Stephen> have a regression test for it..).\n> \n> *poke*\n\nSecond *poke*.\n--\nMichael",
"msg_date": "Mon, 6 Apr 2020 15:58:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Protocol problem with GSSAPI encryption?"
},
{
"msg_contents": "Greetings,\n\n* Stephen Frost (sfrost@snowman.net) wrote:\n> * Andrew Gierth (andrew@tao11.riddles.org.uk) wrote:\n> > >>>>> \"Peter\" == Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > \n> > >> It seems to me that this is a bug in ProcessStartupPacket, which\n> > >> should accept both GSS or SSL negotiation requests on a connection\n> > >> (in either order). Maybe secure_done should be two flags rather than\n> > >> one?\n> > \n> > Peter> I have also seen reports of that. I think your analysis is\n> > Peter> correct.\n> > \n> > I figure something along these lines for the fix. Anyone in a position\n> > to test this?\n> \n> At least at first blush, I tend to agree with your analysis and patch.\n> \n> I'll see about getting this actually set up and tested in the next week\n> or so (and maybe there's some way to also manage to have a regression\n> test for it..).\n\nAfter testing this and playing around with it a bit, I've gone ahead and\npushed it.\n\nThanks!\n\nStephen",
"msg_date": "Sat, 2 May 2020 14:02:38 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Protocol problem with GSSAPI encryption?"
}
]
[
{
"msg_contents": "I started to fool around with the ruleutils.c rewrite discussed in [1],\nand ran into an independent bug: if you attach column aliases to a\npartitioned table, and some of the partitions don't have an exact match of\ncolumn attnums, then EXPLAIN uses the wrong aliases for those partitions.\nAs an example, after modifying partition_prune.sql like this:\n\ndiff --git a/src/test/regress/sql/partition_prune.sql b/src/test/regress/sql/partition_prune.sql\nindex a5900e5..41f0b6f 100644\n--- a/src/test/regress/sql/partition_prune.sql\n+++ b/src/test/regress/sql/partition_prune.sql\n@@ -202,8 +202,13 @@ CREATE TABLE part (a INT, b INT) PARTITION BY LIST (a);\n CREATE TABLE part_p1 PARTITION OF part FOR VALUES IN (-2,-1,0,1,2);\n CREATE TABLE part_p2 PARTITION OF part DEFAULT PARTITION BY RANGE(a);\n CREATE TABLE part_p2_p1 PARTITION OF part_p2 DEFAULT;\n+CREATE TABLE part_rev (b INT, c INT, a INT);\n+ALTER TABLE part ATTACH PARTITION part_rev FOR VALUES IN (3); -- fail\n+ALTER TABLE part_rev DROP COLUMN c;\n+ALTER TABLE part ATTACH PARTITION part_rev FOR VALUES IN (3); -- now it's ok\n INSERT INTO part VALUES (-1,-1), (1,1), (2,NULL), (NULL,-2),(NULL,NULL);\n EXPLAIN (COSTS OFF) SELECT tableoid::regclass as part, a, b FROM part WHERE a IS NULL ORDER BY 1, 2, 3;\n+EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM part p(x) ORDER BY x;\n \n --\n -- some more cases\n\nthen the EXPLAIN output produced by HEAD looks like:\n\n QUERY PLAN \n-----------------------------------------------\n Sort\n Output: p.x, p.b\n Sort Key: p.x\n -> Append\n -> Seq Scan on public.part_p1 p\n Output: p.x, p.b\n -> Seq Scan on public.part_rev p_1\n Output: p_1.a, p_1.x\n -> Seq Scan on public.part_p2_p1 p_2\n Output: p_2.x, p_2.b\n(10 rows)\n\nwherein the \"x\" alias for column \"a\" has been applied to part_rev.b.\nThat's wrong and confusing.\n\nThe reason this happens is that expand_single_inheritance_child()\njust clones the parent RTE's alias node without any thought for\nthe 
possibility that the columns don't match one-to-one. It's\nan ancient bug that affects traditional inheritance as well as\npartitioning.\n\nI experimented with fixing this by making expand_single_inheritance_child\ngenerate a correctly-adjusted child alias node, which seems reasonable\nsince it takes pains to adjust the rest of the child RTE for the different\ncolumn layout. It turns out to be slightly tedious to do that without\ncausing a lot of regression test diffs: if we add an alias node where\nthere was none before, that affects ruleutils.c's selection of table\naliases not just column aliases. The \"variant-a\" patch below mostly\nsucceeds in avoiding test diffs, but it adds a fair amount of complication\nto inherit.c. The \"variant-b\" patch below uses a simpler way of setting\nup the child aliases, which results in a whole lot of test diffs: all\nchildren of a parent named \"x\" will now show in EXPLAIN with aliases like\n\"x_1\", \"x_2\", etc. (That happens already if you wrote an explicit table\nalias \"x\", but not if you didn't.) While my initial reaction was that\nthat was an unacceptable amount of churn, the idea gets more appealing the\nmore I think about it. It means that tables you did not name in the query\nwill be shown with aliases that clearly identify their connection to\nsomething you did name. So despite the added churn, I'm kind of attracted\nto the variant-b approach. (Note that the discussion in [1] is almost\ncertainly going to result in some changes to ruleutils.c's alias-selection\nbehavior anyway, so I don't think staying exactly compatible with v12 is\nworth much here.)\n\nOn the other hand, if we went with variant-a it might be plausible to\nback-patch this fix. 
But given the fact that this is a mostly cosmetic\nproblem, and we've not had field complaints, I don't feel a big need\nto fix it in the back branches.\n\nSome other loose ends:\n\n* variant-a's diffs in expected/postgres_fdw.out indicate that\npostgres_fdw is doing something weird with the table aliases it selects to\nprint in the \"Relations:\" output. I think this is an independent bug ---\nand it surely *is* a bug, because the aliases printed by HEAD don't agree\nwith the table aliases used for Vars of those relations. But I haven't\nrun it to ground yet. (Notice that variant-b fixes those discrepancies in\nthe opposite direction...)\n\n* To make computing the modified column alias list cheap, I made\nmake_inh_translation_list() compute a reverse-mapping array showing the\nparent column associated with each child column. I'm more than a little\nbit tempted to add that array to the AppendRelInfo struct, instead of\npassing it back separately and then discarding it. We could use the info\nto simplify and speed up the reverse-mapping logic added by commit\n553d2ec27, and I bet there will be other use-cases later. But I've not\ndone that in these patches.\n\nThoughts, objections, better ideas?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/001001d4f44b%242a2cca50%247e865ef0%24%40lab.ntt.co.jp",
"msg_date": "Sat, 30 Nov 2019 21:40:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Bogus EXPLAIN results with column aliases for mismatched partitions"
},
{
"msg_contents": "I wrote:\n> * variant-a's diffs in expected/postgres_fdw.out indicate that\n> postgres_fdw is doing something weird with the table aliases it selects to\n> print in the \"Relations:\" output. I think this is an independent bug ---\n> and it surely *is* a bug, because the aliases printed by HEAD don't agree\n> with the table aliases used for Vars of those relations. But I haven't\n> run it to ground yet. (Notice that variant-b fixes those discrepancies in\n> the opposite direction...)\n\nI checked that, and indeed postgres_fdw is doing something randomly\ndifferent from what ruleutils does. In set_rtable_names(), the\nfirst priority is rte->alias->aliasname, and if that's not set\nthen (for a RELATION RTE) you get the result of get_rel_name().\npostgres_fdw is taking rte->eref->aliasname as being the alias,\nwhich is usually the same string, but \"usually\" doesn't cut it.\nSo we should make it look at rte->alias instead.\n\nNow, there is another thing that set_rtable_names() is doing that\npostgres_fdw doesn't do, which is to unique-ify the chosen alias\nnames by adding \"_NN\" if the querytree contains multiple uses of\nthe same table alias. I don't see any practical way for postgres_fdw\nto match that behavior, since it lacks global information about the\nquery. If we wanted to fix it, what we'd likely need to do is\npostpone creation of the relation_name strings until EXPLAIN,\nproviding some way for postgres_fdw to ask ruleutils.c what alias\nit'd assigned to a particular RTE. This seems like it wouldn't be\nterribly painful for base relations but it'd be a mess for joins\nand aggregates, so I'm not eager to do something like that.\nIn practice the presence or absence of \"_NN\" might not be too\nconfusing --- it's certainly not as bad as the inconsistency being\nshown now.\n\nIn short then I propose the attached fix for this issue.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 01 Dec 2019 14:38:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Bogus EXPLAIN results with column aliases for mismatched\n partitions"
},
{
"msg_contents": "I wrote:\n> Now, there is another thing that set_rtable_names() is doing that\n> postgres_fdw doesn't do, which is to unique-ify the chosen alias\n> names by adding \"_NN\" if the querytree contains multiple uses of\n> the same table alias. I don't see any practical way for postgres_fdw\n> to match that behavior, since it lacks global information about the\n> query. If we wanted to fix it, what we'd likely need to do is\n> postpone creation of the relation_name strings until EXPLAIN,\n> providing some way for postgres_fdw to ask ruleutils.c what alias\n> it'd assigned to a particular RTE.\n\nHmmm ... so actually, that isn't *quite* as bad as I thought:\nExplainState does already expose that information, so we just need\nto rearrange how postgres_fdw does things. Here's a revised proposed\npatch, which exposes (and fixes) several additional test cases where\nthe Relations: string was previously visibly out of sync with what\nruleutils was using for Var names.\n\nBTW, the existing code always schema-qualifies the relation names,\non the rather lame grounds that it's producing the string without\nknowing whether EXPLAIN VERBOSE will be specified. In this code,\nthe verbose flag is available so it would be trivial to make the\noutput conform to EXPLAIN's normal policy. I didn't change that\nhere because there'd be a bunch more test output diffs of no\nintellectual interest. Should we change it, or leave well enough\nalone?\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 01 Dec 2019 16:34:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Bogus EXPLAIN results with column aliases for mismatched\n partitions"
},
{
"msg_contents": "On Sun, Dec 1, 2019 at 11:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> then the EXPLAIN output produced by HEAD looks like:\n>\n> QUERY PLAN\n> -----------------------------------------------\n> Sort\n> Output: p.x, p.b\n> Sort Key: p.x\n> -> Append\n> -> Seq Scan on public.part_p1 p\n> Output: p.x, p.b\n> -> Seq Scan on public.part_rev p_1\n> Output: p_1.a, p_1.x\n> -> Seq Scan on public.part_p2_p1 p_2\n> Output: p_2.x, p_2.b\n> (10 rows)\n>\n> wherein the \"x\" alias for column \"a\" has been applied to part_rev.b.\n> That's wrong and confusing.\n\nAgreed.\n\n> The reason this happens is that expand_single_inheritance_child()\n> just clones the parent RTE's alias node without any thought for\n> the possibility that the columns don't match one-to-one. It's\n> an ancient bug that affects traditional inheritance as well as\n> partitioning.\n>\n> I experimented with fixing this by making expand_single_inheritance_child\n> generate a correctly-adjusted child alias node, which seems reasonable\n> since it takes pains to adjust the rest of the child RTE for the different\n> column layout. It turns out to be slightly tedious to do that without\n> causing a lot of regression test diffs: if we add an alias node where\n> there was none before, that affects ruleutils.c's selection of table\n> aliases not just column aliases. The \"variant-a\" patch below mostly\n> succeeds in avoiding test diffs, but it adds a fair amount of complication\n> to inherit.c. The \"variant-b\" patch below uses a simpler way of setting\n> up the child aliases, which results in a whole lot of test diffs: all\n> children of a parent named \"x\" will now show in EXPLAIN with aliases like\n> \"x_1\", \"x_2\", etc. (That happens already if you wrote an explicit table\n> alias \"x\", but not if you didn't.) While my initial reaction was that\n> that was an unacceptable amount of churn, the idea gets more appealing the\n> more I think about it. 
It means that tables you did not name in the query\n> will be shown with aliases that clearly identify their connection to\n> something you did name. So despite the added churn, I'm kind of attracted\n> to the variant-b approach. (Note that the discussion in [1] is almost\n> certainly going to result in some changes to ruleutils.c's alias-selection\n> behavior anyway, so I don't think staying exactly compatible with v12 is\n> worth much here.)\n>\n> On the other hand, if we went with variant-a it might be plausible to\n> back-patch this fix. But given the fact that this is a mostly cosmetic\n> problem, and we've not had field complaints, I don't feel a big need\n> to fix it in the back branches.\n\nI tend to agree that \"variant b\" is more appealing for the reason that\nit makes the connection between child RTEs and that of the table named\nin the query from which they originate more explicit.\n\n> * To make computing the modified column alias list cheap, I made\n> make_inh_translation_list() compute a reverse-mapping array showing the\n> parent column associated with each child column. I'm more than a little\n> bit tempted to add that array to the AppendRelInfo struct, instead of\n> passing it back separately and then discarding it. We could use the info\n> to simplify and speed up the reverse-mapping logic added by commit\n> 553d2ec27, and I bet there will be other use-cases later. But I've not\n> done that in these patches.\n\n+1\n\nThanks,\nAmit\n\n\n",
"msg_date": "Mon, 2 Dec 2019 12:05:06 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bogus EXPLAIN results with column aliases for mismatched\n partitions"
},
{
"msg_contents": "On Mon, Dec 2, 2019 at 6:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > Now, there is another thing that set_rtable_names() is doing that\n> > postgres_fdw doesn't do, which is to unique-ify the chosen alias\n> > names by adding \"_NN\" if the querytree contains multiple uses of\n> > the same table alias. I don't see any practical way for postgres_fdw\n> > to match that behavior, since it lacks global information about the\n> > query. If we wanted to fix it, what we'd likely need to do is\n> > postpone creation of the relation_name strings until EXPLAIN,\n> > providing some way for postgres_fdw to ask ruleutils.c what alias\n> > it'd assigned to a particular RTE.\n>\n> Hmmm ... so actually, that isn't *quite* as bad as I thought:\n> ExplainState does already expose that information, so we just need\n> to rearrange how postgres_fdw does things. Here's a revised proposed\n> patch, which exposes (and fixes) several additional test cases where\n> the Relations: string was previously visibly out of sync with what\n> ruleutils was using for Var names.\n\nOne thing I noticed is this comment:\n\n /*\n * Add names of relation handled by the foreign scan when the scan is a\n- * join\n+ * join. The input looks something like \"(1) LEFT JOIN (2)\", and we must\n+ * replace the digit strings with the correct relation names. We do that\n+ * here, not when the plan is created, because we can't know what aliases\n+ * ruleutils.c will assign at plan creation time.\n */\n\nI think \"names of relation\" should be \"names of relations\", so how\nabout fixing that as well? Other than that the patch looks good to\nme. Thanks for working on this! (Actually, we discussed this before.\nSee [1]. I wasn't able to come up with a solution, though.)\n\n> BTW, the existing code always schema-qualifies the relation names,\n> on the rather lame grounds that it's producing the string without\n> knowing whether EXPLAIN VERBOSE will be specified. 
In this code,\n> the verbose flag is available so it would be trivial to make the\n> output conform to EXPLAIN's normal policy. I didn't change that\n> here because there'd be a bunch more test output diffs of no\n> intellectual interest. Should we change it, or leave well enough\n> alone?\n\nI think it would be better to keep that as-is because otherwise, in\ncase of a foreign join or aggregate, EXPLAIN without the VERBOSE\noption won't show any information about foreign tables involved in\nthat foreign join or aggregate, which isn't useful for users.\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/c2c7191b-5ca0-b37a-9e9d-4df15ffb554b%40lab.ntt.co.jp\n\n\n",
"msg_date": "Mon, 2 Dec 2019 20:44:55 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bogus EXPLAIN results with column aliases for mismatched\n partitions"
},
{
"msg_contents": "Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> I think \"names of relation\" should be \"names of relations\", so how\n> about fixing that as well?\n\nAh, missed that.\n\n> On Mon, Dec 2, 2019 at 6:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> BTW, the existing code always schema-qualifies the relation names,\n>> on the rather lame grounds that it's producing the string without\n>> knowing whether EXPLAIN VERBOSE will be specified. In this code,\n>> the verbose flag is available so it would be trivial to make the\n>> output conform to EXPLAIN's normal policy. I didn't change that\n>> here because there'd be a bunch more test output diffs of no\n>> intellectual interest. Should we change it, or leave well enough\n>> alone?\n\n> I think it would be better to keep that as-is because otherwise, in\n> case of a foreign join or aggregate, EXPLAIN without the VERBOSE\n> option won't show any information about foreign tables involved in\n> that foreign join or aggregate, which isn't useful for users.\n\nNo, I'm just talking about dropping the schema-qualification of table\nnames when !es->verbose, not removing the Relations: output altogether.\nThat would be more consistent with the rest of EXPLAIN's output.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 Dec 2019 09:41:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Bogus EXPLAIN results with column aliases for mismatched\n partitions"
},
{
"msg_contents": "I wrote:\n> Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n>> On Mon, Dec 2, 2019 at 6:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> BTW, the existing code always schema-qualifies the relation names,\n>>> on the rather lame grounds that it's producing the string without\n>>> knowing whether EXPLAIN VERBOSE will be specified. In this code,\n>>> the verbose flag is available so it would be trivial to make the\n>>> output conform to EXPLAIN's normal policy. I didn't change that\n>>> here because there'd be a bunch more test output diffs of no\n>>> intellectual interest. Should we change it, or leave well enough\n>>> alone?\n\n>> I think it would be better to keep that as-is because otherwise, in\n>> case of a foreign join or aggregate, EXPLAIN without the VERBOSE\n>> option won't show any information about foreign tables involved in\n>> that foreign join or aggregate, which isn't useful for users.\n\n> No, I'm just talking about dropping the schema-qualification of table\n> names when !es->verbose, not removing the Relations: output altogether.\n> That would be more consistent with the rest of EXPLAIN's output.\n\nConcretely, I'm thinking of the attached (on top of the other patch,\nwhich I just pushed). This agrees exactly with what ExplainTargetRel\ndoes for regular scans.\n\nOne could make an argument that we should schema-qualify, regardless\nof the \"verbose\" flag, if the output format isn't EXPLAIN_FORMAT_TEXT.\nThat would reduce the amount of variability that plan analysis tools\nhave to cope with. However, ExplainTargetRel itself doesn't provide\nthe schema in non-verbose mode. 
Maybe it should, ie we should change\nit like\n\n case T_ModifyTable:\n /* Assert it's on a real relation */\n Assert(rte->rtekind == RTE_RELATION);\n objectname = get_rel_name(rte->relid);\n- if (es->verbose)\n+ if (es->verbose || es->format != EXPLAIN_FORMAT_TEXT)\n namespace = get_namespace_name(get_rel_namespace(rte->relid));\n objecttag = \"Relation Name\";\n break;\n\n(and likewise for function schema names, a bit further down)?\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 02 Dec 2019 16:45:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Bogus EXPLAIN results with column aliases for mismatched\n partitions"
},
{
"msg_contents": "On Tue, Dec 3, 2019 at 6:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> >> On Mon, Dec 2, 2019 at 6:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>> BTW, the existing code always schema-qualifies the relation names,\n> >>> on the rather lame grounds that it's producing the string without\n> >>> knowing whether EXPLAIN VERBOSE will be specified. In this code,\n> >>> the verbose flag is available so it would be trivial to make the\n> >>> output conform to EXPLAIN's normal policy. I didn't change that\n> >>> here because there'd be a bunch more test output diffs of no\n> >>> intellectual interest. Should we change it, or leave well enough\n> >>> alone?\n>\n> >> I think it would be better to keep that as-is because otherwise, in\n> >> case of a foreign join or aggregate, EXPLAIN without the VERBOSE\n> >> option won't show any information about foreign tables involved in\n> >> that foreign join or aggregate, which isn't useful for users.\n>\n> > No, I'm just talking about dropping the schema-qualification of table\n> > names when !es->verbose, not removing the Relations: output altogether.\n> > That would be more consistent with the rest of EXPLAIN's output.\n\nSorry, I misread the meaning.\n\n> Concretely, I'm thinking of the attached (on top of the other patch,\n> which I just pushed). This agrees exactly with what ExplainTargetRel\n> does for regular scans.\n\nThanks for the patch! (The patch didn't apply to HEAD cleanly,\nthough.) I like consistency, so +1 for the change.\n\n> One could make an argument that we should schema-qualify, regardless\n> of the \"verbose\" flag, if the output format isn't EXPLAIN_FORMAT_TEXT.\n> That would reduce the amount of variability that plan analysis tools\n> have to cope with. However, ExplainTargetRel itself doesn't provide\n> the schema in non-verbose mode. 
Maybe it should, ie we should change\n> it like\n>\n> case T_ModifyTable:\n> /* Assert it's on a real relation */\n> Assert(rte->rtekind == RTE_RELATION);\n> objectname = get_rel_name(rte->relid);\n> - if (es->verbose)\n> + if (es->verbose || es->format != EXPLAIN_FORMAT_TEXT)\n> namespace = get_namespace_name(get_rel_namespace(rte->relid));\n> objecttag = \"Relation Name\";\n> break;\n>\n> (and likewise for function schema names, a bit further down)?\n\nSeems like another issue to me.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Tue, 3 Dec 2019 13:13:43 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bogus EXPLAIN results with column aliases for mismatched\n partitions"
},
{
"msg_contents": "Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> On Tue, Dec 3, 2019 at 6:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Concretely, I'm thinking of the attached (on top of the other patch,\n>> which I just pushed). This agrees exactly with what ExplainTargetRel\n>> does for regular scans.\n\n> Thanks for the patch! (The patch didn't apply to HEAD cleanly,\n> though.) I like consistency, so +1 for the change.\n\nYeah, 55a1954da probably changed the expected output from what that\nhas. I'll clean it up and push.\n\n>> One could make an argument that we should schema-qualify, regardless\n>> of the \"verbose\" flag, if the output format isn't EXPLAIN_FORMAT_TEXT.\n>> That would reduce the amount of variability that plan analysis tools\n>> have to cope with. However, ExplainTargetRel itself doesn't provide\n>> the schema in non-verbose mode. Maybe it should, ie we should change\n>> it like ...\n\n> Seems like another issue to me.\n\nAgreed. When/if we change that, we could make this code follow along.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 03 Dec 2019 12:12:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Bogus EXPLAIN results with column aliases for mismatched\n partitions"
}
] |
[
{
"msg_contents": "The code in GetSnapshotData() that read the `xid` field of PGXACT struct has a dependency on code in GetNewTransactionId() that write `MyPgXact->xid`. It means that the store of xid should happen before the load of it. In C11, we can use [Release-Acquire ordering](https://en.cppreference.com/w/c/atomic/memory_order#Release-Acquire_ordering) to achieve it. But now, there is no special operation to do it(, and the [volatile](https://en.cppreference.com/w/c/language/volatile) keyword should not play any role in this situation).\r\n\r\n\r\nSo it means that when a backend A returns from GetNewTransactionId(), the newval of `MyPgXact->xid` maybe just in CPU store buffer, or CPU cache line, so the newval is not yet visible to backend B that calling GetSnapshotData(). So the assumption that 'all top-level XIDs <= latestCompletedXid are either present in the ProcArray, or not running anymore' may not be safe. \nThe code in GetSnapshotData() that read the `xid` field of PGXACT struct has a dependency on code in GetNewTransactionId() that write `MyPgXact->xid`. It means that the store of xid should happen before the load of it. In C11, we can use [Release-Acquire ordering](https://en.cppreference.com/w/c/atomic/memory_order#Release-Acquire_ordering) to achieve it. But now, there is no special operation to do it(, and the [volatile](https://en.cppreference.com/w/c/language/volatile) keyword should not play any role in this situation).So it means that when a backend A returns from GetNewTransactionId(), the newval of `MyPgXact->xid` maybe just in CPU store buffer, or CPU cache line, so the newval is not yet visible to backend B that calling GetSnapshotData(). So the assumption that 'all top-level XIDs <= latestCompletedXid are either present in the ProcArray, or not running anymore' may not be safe.",
"msg_date": "Sun, 1 Dec 2019 15:31:55 +0800",
"msg_from": "\"=?utf-8?B?55uP5LiA?=\" <w@hidva.com>",
"msg_from_op": true,
"msg_subject": "Issue about memory order on ARM"
},
{
"msg_contents": "\"=?utf-8?B?55uP5LiA?=\" <w@www.hidva.com> writes:\n> The code in GetSnapshotData() that read the `xid` field of PGXACT struct has a dependency on code in GetNewTransactionId() that write `MyPgXact->xid`. It means that the store of xid should happen before the load of it. In C11, we can use [Release-Acquire ordering](https://en.cppreference.com/w/c/atomic/memory_order#Release-Acquire_ordering) to achieve it. But now, there is no special operation to do it(, and the [volatile](https://en.cppreference.com/w/c/language/volatile) keyword should not play any role in this situation).\n> So it means that when a backend A returns from GetNewTransactionId(), the newval of `MyPgXact->xid` maybe just in CPU store buffer, or CPU cache line, so the newval is not yet visible to backend B that calling GetSnapshotData(). So the assumption that 'all top-level XIDs <= latestCompletedXid are either present in the ProcArray, or not running anymore' may not be safe. \n\nYou'e ignoring the memory barriers that are implicit in LWLock\nacquisition and release; as well as the fact that it's transaction\nend, not start, that needs to be interlocked. Please read the section\n\"Interlocking Transaction Begin, Transaction End, and Snapshots\"\nin src/backend/access/transam/README.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 01 Dec 2019 10:49:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Issue about memory order on ARM"
},
{
"msg_contents": "Sorry to bother you, now I know that there is no problem here.\r\n\r\nThe model for reading and writing of PGXACT::xid and ShmemVariableCache->latestCompletedXid can be simplified as follows:\r\n\r\n backend A backend B backend C\r\n wlock(XidGenLock); wlock(XidGenLock); rlock(ProcArrayLock);\r\n write APgXact->xid; write BPgXact->xid; read latestCompletedXid;\r\n unlock(XidGenLock); unlock(XidGenLock); read APgXact->xid;\r\n ... read BPgXact->xid;\r\n wlock(ProcArrayLock); unlock(ProcArrayLock);\r\n write latestCompletedXid;\r\n unlock(ProcArrayLock);\r\n\r\nMy previous problem was that C might not be able to see the value of APgXact->xid written by A because there was no obvious acquire-release operation during this. But now I find that there are already some acquire-release operations here. Because of the `unlock(XidGenLock)` in A and `wlock(XidGenLock)` in B and the rules introduced in [Inter-thread happens-before](https://en.cppreference.com/w/cpp/atomic/memory_order), we can know that the `write APgXact->xid` in A inter-thread happens before `write BPgXact->xid` in B. And `write BPgXact->xid` is sequenced before `write latestCompletedXid` in B according to rules introduced in [Sequenced-before rules](https://en.cppreference.com/w/cpp/language/eval_order). And similarly `write latestCompletedXid` in B inter-thread happens before `read latestCompletedXid` in C. So the `write APgXact->xid` in A inter-thread happens before `read APgXact->xid` in C. So C can see the value of APgXact->xid written by A.",
"msg_date": "Mon, 2 Dec 2019 12:22:37 +0800",
"msg_from": "\"=?utf-8?B?55uP5LiA?=\" <w@hidva.com>",
"msg_from_op": true,
"msg_subject": "Re:Issue about memory order on ARM"
}
] |
[
{
"msg_contents": "Hi,\n\nwhile evaluating one of the CF patches (the incremental sort one, but\nthat's mostly irrelevant), I ran into a strange issue with join planning\nfor a fairly simple query. I needed to asses how the patch affects query\nplanning for different GUCs, so I ran a group of queries and stashed the\nresults into a table with this structure\n\n CREATE TABLE plans (\n query text,\n index text,\n option text,\n plan text,\n type text,\n force boolean,\n parallel boolean);\n\nEssentially all the columns are 'dimensions' with the exception of the\n'plan' column storing the explain plan generated.\n\nThe results (~60k rows / 30MB) is available here:\n\n https://drive.google.com/open?id=1Q4oR1KtaAil87lbMo-xUvvw_0wf_zDx-\n\n copy plans from '/tmp/results-100M.data';\n\nTo evaluate results, I needed to see which GUCs result in a different\nplan compared to the master, so I did a query like this:\n\n with\n master AS (select * from plans where option = ''),\n create_ordered_paths_parallel AS (select * from plans where option = 'devel_create_ordered_paths_parallel'),\n create_partial_grouping_paths_2 AS (select * from plans where option = 'devel_create_partial_grouping_paths_2'),\n create_partial_grouping_paths AS (select * from plans where option = 'devel_create_partial_grouping_paths'),\n standard_join_search AS (select * from plans where option = 'devel_standard_join_search'),\n add_paths_to_grouping_rel AS (select * from plans where option = 'devel_add_paths_to_grouping_rel'),\n gather_grouping_paths AS (select * from plans where option = 'devel_gather_grouping_paths'),\n create_ordered_paths AS (select * from plans where option = 'devel_create_ordered_paths'),\n add_paths_to_grouping_rel_parallel AS (select * from plans where option = 'devel_add_paths_to_grouping_rel_parallel'),\n set_rel_pathlist AS (select * from plans where option = 'devel_set_rel_pathlist'),\n apply_scanjoin_target_to_paths AS (select * from plans where option = 
'devel_apply_scanjoin_target_to_paths')\n select\n master.query,\n master.index,\n master.type,\n master.force,\n master.parallel,\n md5(master.plan),\n (CASE WHEN (master.plan = r1.plan) THEN NULL ELSE 'DIFF' END) guc1,\n (CASE WHEN (master.plan = r2.plan) THEN NULL ELSE 'DIFF' END) guc2,\n (CASE WHEN (master.plan = r3.plan) THEN NULL ELSE 'DIFF' END) guc3,\n (CASE WHEN (master.plan = r4.plan) THEN NULL ELSE 'DIFF' END) guc4,\n (CASE WHEN (master.plan = r5.plan) THEN NULL ELSE 'DIFF' END) guc5,\n (CASE WHEN (master.plan = r6.plan) THEN NULL ELSE 'DIFF' END) guc6,\n (CASE WHEN (master.plan = r7.plan) THEN NULL ELSE 'DIFF' END) guc7,\n (CASE WHEN (master.plan = r8.plan) THEN NULL ELSE 'DIFF' END) guc8,\n (CASE WHEN (master.plan = r9.plan) THEN NULL ELSE 'DIFF' END) guc9,\n (CASE WHEN (master.plan = r10.plan) THEN NULL ELSE 'DIFF' END) guc10\n from\n master\n join create_ordered_paths_parallel r1 using (query, index, type, force, parallel)\n join create_partial_grouping_paths r2 using (query, index, type, force, parallel)\n join create_partial_grouping_paths_2 r3 using (query, index, type, force, parallel)\n join standard_join_search r4 using (query, index, type, force, parallel)\n join add_paths_to_grouping_rel r5 using (query, index, type, force, parallel)\n join gather_grouping_paths r6 using (query, index, type, force, parallel)\n join create_ordered_paths r7 using (query, index, type, force, parallel)\n join add_paths_to_grouping_rel_parallel r8 using (query, index, type, force, parallel)\n join set_rel_pathlist r9 using (query, index, type, force, parallel)\n join apply_scanjoin_target_to_paths r10 using (query, index, type, force, parallel);\n\nThis however causes pretty serious issues during planning. 
Firstly, it\nconsumes insane amounts of memory, to the extent that on my machine it\ncrashes due to OOM.\n\nIf I lower the join_collapse_limit to 1, it works just fine, but once I\nincrease it too much, the memory consumption and planning time go\nthrough the roof and eventually crashes.\n\nI did a bit of investigation, and after instrumenting aset.c a bit I got\na statistic like this:\n\n size | alloc count | alloc sum | free count | free sum | diff\n -------+-------------+-----------+------------+----------+-----------\n 64 | 5606157 | 358794048 | 118 | 7552 | 358786496\n\ni.e. there's a lot of 64B chunks allocated, but almost none of them are\nfreed, resulting in ~350MB leak. There are various other sizes with a\nlot of allocated chunks, but nowhere close to this.\n\nIt seems most of this comes from find_mergeclauses_for_outer_pathkeys()\nwhich builds matched_restrictinfos and then just leaves it allocated.\nAfter pfreeing this (see attached patch), the memory usage gets way down\nand the query completes. I'm sure there are other things we could pfree\nto reduce the memory usage even more.\n\nThat being said, I wonder if this is worth it - the query is a bit\nstrange and probably could be rewritten, and we generally don't free\nstuff in the planner very aggressively. 
OTOH consuming gigabytes of\nmemory and crashing with OOM is no fun.\n\nThe other issue is planning time, which looks like this:\n\n join_collapse_limit = 1 3.698 ms\n join_collapse_limit = 2 3.109 ms\n join_collapse_limit = 3 5.244 ms\n join_collapse_limit = 4 9.025 ms\n join_collapse_limit = 5 29.371 ms\n join_collapse_limit = 6 83.190 ms\n join_collapse_limit = 7 93.693 ms\n join_collapse_limit = 8 253.369 ms\n join_collapse_limit = 9 760.415 ms\n join_collapse_limit = 10 2385.519 ms\n join_collapse_limit = 11 7880.276 ms\n\ncompared to the execution time (consistently ~800ms) this is pretty\nhigh, and it'd get considerably worse with more tables (it'd start\ncrashing because of OOM too).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sun, 1 Dec 2019 18:01:12 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "surprisingly expensive join planning query"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> It seems most of this comesfrom find_mergeclauses_for_outer_pathkeys()\n> which builds matched_restrictinfos and then just leaves it allocated.\n> After pfreeing this (see attached patch), the memory usage gets way down\n> and the query completes.\n\nInteresting. The memory leak was probably much less bad before\ncommit 1cff1b95a, since in the old List implementation this code\nwould have leaked only a list header. It makes sense to me to\nadd the list_free.\n\nAlternatively, it'd be possible to get rid of the separate List\naltogether, and just add the rinfo's to \"mergeclauses\" immediately.\nThe functionality of the separate list could be replaced by a\nbool variable remembering whether we found any matches in this\npass through the loop. I think the code would be a little less\nclear that way, but this report makes it clear that it's a\nperformance bottleneck, so maybe we should just change it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 01 Dec 2019 13:27:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: surprisingly expensive join planning query"
},
{
"msg_contents": "On Sun, Dec 01, 2019 at 01:27:04PM -0500, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> It seems most of this comesfrom find_mergeclauses_for_outer_pathkeys()\n>> which builds matched_restrictinfos and then just leaves it allocated.\n>> After pfreeing this (see attached patch), the memory usage gets way down\n>> and the query completes.\n>\n>Interesting. The memory leak was probably much less bad before\n>commit 1cff1b95a, since in the old List implementation this code\n>would have leaked only a list header. It makes sense to me to\n>add the list_free.\n>\n\nI forgot to mention I tried on older releases, up to 9.5 (I suspected it\nmight be related to parallel queries), and I get OOM crashes there too.\nI can't say if the memory is leaking slower/faster, though.\n\nI tried fixing 9.5 - a simple pfree(matched_restrictinfos) triggers some\nsort of list_concat error for me, seemed a bit weird TBH.\n\n>Alternatively, it'd be possible to get rid of the separate List\n>altogether, and just add the rinfo's to \"mergeclauses\" immediately.\n>The functionality of the separate list could be replaced by a\n>bool variable remembering whether we found any matches in this\n>pass through the loop. I think the code would be a little less\n>clear that way, but this report makes it clear that it's a\n>performance bottleneck, so maybe we should just change it.\n>\n\nYes, that might be an option. And it works even on 9.5 for me (per the\nattached patch). I don't think it's much less clear compared to just\ndoing an explicit free at the end.\n\nIt does fix cases with up to join_collapse_limit = 10, but with 11 it\nstill OOM-crashes. That definitely depends on available memory, of\ncourse.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sun, 1 Dec 2019 20:05:56 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: surprisingly expensive join planning query"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Sun, Dec 01, 2019 at 01:27:04PM -0500, Tom Lane wrote:\n>> Alternatively, it'd be possible to get rid of the separate List\n>> altogether, and just add the rinfo's to \"mergeclauses\" immediately.\n>> The functionality of the separate list could be replaced by a\n>> bool variable remembering whether we found any matches in this\n>> pass through the loop. I think the code would be a little less\n>> clear that way, but this report makes it clear that it's a\n>> performance bottleneck, so maybe we should just change it.\n\n> Yes, that might be an option. And it works even on 9.5 for me (per the\n> attached patch). I don't think it's much less clear compared to just\n> doing an explicit free at the end.\n\nLGTM.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 01 Dec 2019 14:17:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: surprisingly expensive join planning query"
},
{
"msg_contents": "On Sun, Dec 01, 2019 at 02:17:15PM -0500, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Sun, Dec 01, 2019 at 01:27:04PM -0500, Tom Lane wrote:\n>>> Alternatively, it'd be possible to get rid of the separate List\n>>> altogether, and just add the rinfo's to \"mergeclauses\" immediately.\n>>> The functionality of the separate list could be replaced by a\n>>> bool variable remembering whether we found any matches in this\n>>> pass through the loop. I think the code would be a little less\n>>> clear that way, but this report makes it clear that it's a\n>>> performance bottleneck, so maybe we should just change it.\n>\n>> Yes, that might be an option. And it works even on 9.5 for me (per the\n>> attached patch). I don't think it's much less clear compared to just\n>> doing an explicit free at the end.\n>\n>LGTM.\n>\n\nThanks. Do you think this is backpatch-worthy? I'm leaning to yes, but\nmaybe tweaking this just in master is fine. The query is somewhat\nartificial and there are probably ways to rewrite it.\n\nThe thing that annoys me a bit is that this fix is only partial. It gets\nrid of maybe 80% of the allocations, but there's plenty of unnecessary\nstuff left allocated ...\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 2 Dec 2019 20:22:09 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: surprisingly expensive join planning query"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Sun, Dec 01, 2019 at 02:17:15PM -0500, Tom Lane wrote:\n>> LGTM.\n\n> Thanks. Do you think this is backpatch-worthy? I'm leaning to yes, but\n> maybe tweaking this just in master is fine. The query is somewhat\n> artificial and there are probably ways to rewrite it.\n\nI don't object to back-patching.\n\n> The thing that annoys me a bit is that this fix is only partial. It gets\n> rid of maybe 80% of the allocations, but there's plenty of unnecessary\n> stuff left allocated ...\n\nMeh. I'm not that excited about getting rid of retail space wastage,\nunless there are single dominant points such as you found here. For\nsmall stuff it's far better to worry about memory context management.\n(Speaking of which, I don't quite see why this would have been a problem\nonce you got past geqo_threshold; the context resets that GEQO does\nshould've kept things under control.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 Dec 2019 15:21:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: surprisingly expensive join planning query"
},
{
"msg_contents": "On Mon, Dec 02, 2019 at 03:21:47PM -0500, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Sun, Dec 01, 2019 at 02:17:15PM -0500, Tom Lane wrote:\n>>> LGTM.\n>\n>> Thanks. Do you think this is backpatch-worthy? I'm leaning to yes, but\n>> maybe tweaking this just in master is fine. The query is somewhat\n>> artificial and there are probably ways to rewrite it.\n>\n>I don't object to back-patching.\n>\n\nOK\n\n>> The thing that annoys me a bit is that this fix is only partial. It gets\n>> rid of maybe 80% of the allocations, but there's plenty of unnecessary\n>> stuff left allocated ...\n>\n>Meh. I'm not that excited about getting rid of retail space wastage,\n>unless there are single dominant points such as you found here. For\n>small stuff it's far better to worry about memory context management.\n\nI agree.\n\n>(Speaking of which, I don't quite see why this would have been a problem\n>once you got past geqo_threshold; the context resets that GEQO does\n>should've kept things under control.)\n>\n\nNot sure I follow. geqo_threshold is 12 by default, and the OOM issues\nare hapenning way before that.\n\nIt might be that one reason why this example is so bad is that the CTEs\nhave *exactly* the different join orders are bound to be costed exactly\nthe same I think.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 2 Dec 2019 23:39:33 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: surprisingly expensive join planning query"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> (Speaking of which, I don't quite see why this would have been a problem\n>> once you got past geqo_threshold; the context resets that GEQO does\n>> should've kept things under control.)\n\n> Not sure I follow. geqo_threshold is 12 by default, and the OOM issues\n> are hapenning way before that.\n\nAh, right. But would the peak memory usage keep growing with more than 12\nrels?\n\n> It might be that one reason why this example is so bad is that the CTEs\n> have *exactly* the different join orders are bound to be costed exactly\n> the same I think.\n\nHmm. I didn't really look into exactly why this example is so awful.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 Dec 2019 17:54:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: surprisingly expensive join planning query"
}
]
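The refactoring discussed in the thread above — dropping the separately built matched_restrictinfos list and instead appending to "mergeclauses" directly while tracking matches with a bool — can be sketched as follows. This is an illustrative Python model of the two loop shapes, not the actual C code in the planner; the function and field names are invented for the example.

```python
def merge_clauses_with_temp_list(pathkeys, restrictinfos):
    """Old shape: build a separate 'matched' list on every pass and then
    concatenate it.  In the C code, that per-pass list was what leaked."""
    mergeclauses = []
    for key in pathkeys:
        matched = [ri for ri in restrictinfos if ri["key"] == key]
        if not matched:
            break  # no mergeclause usable for this pathkey
        mergeclauses.extend(matched)
    return mergeclauses


def merge_clauses_with_flag(pathkeys, restrictinfos):
    """Suggested shape: append to the result immediately and remember
    whether this pass found any match with a bool, so no per-pass list
    is ever allocated."""
    mergeclauses = []
    for key in pathkeys:
        found_any = False
        for ri in restrictinfos:
            if ri["key"] == key:
                mergeclauses.append(ri)
                found_any = True
        if not found_any:
            break
    return mergeclauses
```

Both shapes produce the same result in the same order; the second simply avoids the per-pass allocation that added up over exhaustive join planning.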
[
{
"msg_contents": "Hi all,\n\nI have run the TAP tests with an instance of Postgres locally set at\nport 5432 on Windows, to notice that 001_start_stop.pl fails various\ntests because the test tries to use the default port for the node\ninitialized with pg_ctl. The problem can get fixed easily by\nassigning a random port number to that instance.\n\nIt could potentially become a problem if parallel TAP tests run in\nparallel on Windows while initializing the node because of a port\nconflict, but that's actually already a problem now for all the tests\nas all nodes listen to 127.0.0.1 in this case. This cannot happen on\n*nix simply because we use a unique unix domain path, so even if ports\nconflict things are able to work. \n\nAttached is a patch to fix this issue, that I would like to\nback-patch down to 9.4 where the issue can show up.\n\nAny objections?\n--\nMichael",
"msg_date": "Mon, 2 Dec 2019 12:14:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Failure in TAP tests of pg_ctl on Windows with parallel instance set"
},
{
"msg_contents": "\nOn 12/1/19 10:14 PM, Michael Paquier wrote:\n> Hi all,\n>\n> I have run the TAP tests with an instance of Postgres locally set at\n> port 5432 on Windows, to notice that 001_start_stop.pl fails various\n> tests because the test tries to use the default port for the node\n> initialized with pg_ctl. The problem can get fixed easily by\n> assigning a random port number to that instance.\n>\n> It could potentially become a problem if parallel TAP tests run in\n> parallel on Windows while initializing the node because of a port\n> conflict, but that's actually already a problem now for all the tests\n> as all nodes listen to 127.0.0.1 in this case. This cannot happen on\n> *nix simply because we use a unique unix domain path, so even if ports\n> conflict things are able to work. \n>\n> Attached is a patch to fix this issue, that I would like to\n> back-patch down to 9.4 where the issue can show up.\n>\n> Any objections?\n\n\n\nLooks reasonable. I wonder if there are other test sets where we need to\nset the port.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Mon, 2 Dec 2019 07:57:31 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Failure in TAP tests of pg_ctl on Windows with parallel instance\n set"
},
{
"msg_contents": "On Mon, Dec 02, 2019 at 07:57:31AM -0500, Andrew Dunstan wrote:\n> Looks reasonable.\n\nThanks, committed and back-patched down to 11, which is where we have\nPostgresNode::get_free_port. This could go further down with more\nrefactoring of PostgresNode.pm but as it took a long time to find this\nissue that does not seem really worth the extra legwork.\n\n> I wonder if there are other test sets where we need to set the port.\n\nI looked at that before sending the first email, with the tests of\ninitdb and pg_basebackup potentially breaking stuff, but we never\ninitialize (direct initdb) and then start a node without\nPostgresNode.pm. So we are fine as far as I saw.\n--\nMichael",
"msg_date": "Tue, 3 Dec 2019 13:03:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Failure in TAP tests of pg_ctl on Windows with parallel instance\n set"
}
]
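The "assign a random free port" fix discussed above can be sketched with the usual bind-to-port-0 trick. This is illustrative Python, not the Perl of PostgresNode::get_free_port, which is more careful: it also keeps a reservation so that concurrent test runs do not race for the same port.

```python
import socket

def get_free_tcp_port(host="127.0.0.1"):
    # Bind to port 0 and let the OS pick an unused ephemeral port.
    # The port is released again on close, so two concurrent callers
    # can still race for it -- which is why the real get_free_port
    # holds a reservation for the lifetime of the test.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((host, 0))
        return s.getsockname()[1]
```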
[
{
"msg_contents": "Hi all,\n\nI was just looking at some callers of pg_fsync(), to notice that some\ncode paths don't use data_sync_elevel(). For some code paths, that's\nactually better to never PANIC (say backup_label file, logical\ndecoding snapshot, lock file where FATAL/LOG are used now, etc.).\nHowever I have spotted three code paths where this is not done and I\nthink that's not fine:\n- 2PC file generated at checkpoint time.\n- WAL segment initialization.\n- Temporary state file for a replication slot save, which may cause\nERRORs at checkpoint time.\n\nAny thoughts?\n--\nMichael",
"msg_date": "Mon, 2 Dec 2019 13:58:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Missing data_sync_elevel() for some calls of pg_fsync()?"
},
{
"msg_contents": "On Mon, Dec 2, 2019 at 5:58 PM Michael Paquier <michael@paquier.xyz> wrote:\n> I was just looking at some callers of pg_fsync(), to notice that some\n> code paths don't use data_sync_elevel(). For some code paths, that's\n> actually better to never PANIC (say backup_label file, logical\n> decoding snapshot, lock file where FATAL/LOG are used now, etc.).\n> However I have spotted three code paths where this is not done and I\n> think that's not fine:\n> - 2PC file generated at checkpoint time.\n> - WAL segment initialization.\n> - Temporary state file for a replication slot save, which may cause\n> ERRORs at checkpoint time.\n\nOne of the distinctions I had in mind when reviewing/working on the\nPANIC stuff was this:\n\n1. Some code opens a file, writes some stuff to it, closes, and then\nfsyncs it, and if that fails and and it ever retries it'll redo all of\nthose steps again. We know that some kernels might have thrown away\nthe data, but we don't care about the copy in the kernel's cache\nbecause we'll write it out again next time around.\n\n2. Some code, primarily buffer pool write-back code, writes data out\nto the file, then throws away the only copy we have of it other than\nthe WAL by using the buffer for some other block, and then later\n(usually in the checkpointer) fsyncs it. One way to look at it is\nthat if the fsync fails, the only place left to get that data (which\nmay represent committed transactions) is the WAL, and the only way to\nget it is crash recovery. Another way to look at it is that if we\ndidn't PANIC, the checkpointer would try again, but it's easily\ndemonstrable that if it tries again, certain kernels will do nothing\nand then tell you that it succeeded, so we need to prevent that or our\ncheckpoint would be a data-eating lie.\n\nI didn't look closely at your patch, but I think those are category 1,\nno? 
Admittedly there is an argument that we should panic in those\ncases too, because otherwise we're second guessing how the kernel\nworks, and I did make a similar argument for why sync_file_range()\nfailure is panic-worthy.\n\n\n",
"msg_date": "Mon, 2 Dec 2019 18:43:23 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing data_sync_elevel() for some calls of pg_fsync()?"
},
{
"msg_contents": "On Mon, 2 Dec 2019 at 13:43, Thomas Munro <thomas.munro@gmail.com> wrote:\n\n>\n> 1. Some code opens a file, writes some stuff to it, closes, and then\n> fsyncs it, and if that fails and and it ever retries it'll redo all of\n> those steps again. We know that some kernels might have thrown away\n> the data, but we don't care about the copy in the kernel's cache\n> because we'll write it out again next time around.\n>\n\nCan we trust the kernel to be reporting the EIO or ENOSPC only from\nwriteback buffers for the actual file we're fsync()ing though? Not from\nbuffers it flushed while performing our fsync() request, failed to flush,\nand complained about?\n\nI'm not confident I want to assume that.\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\nOn Mon, 2 Dec 2019 at 13:43, Thomas Munro <thomas.munro@gmail.com> wrote:\n1. Some code opens a file, writes some stuff to it, closes, and then\nfsyncs it, and if that fails and and it ever retries it'll redo all of\nthose steps again. We know that some kernels might have thrown away\nthe data, but we don't care about the copy in the kernel's cache\nbecause we'll write it out again next time around.Can we trust the kernel to be reporting the EIO or ENOSPC only from writeback buffers for the actual file we're fsync()ing though? Not from buffers it flushed while performing our fsync() request, failed to flush, and complained about?I'm not confident I want to assume that.-- Craig Ringer http://www.2ndQuadrant.com/ 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Mon, 2 Dec 2019 13:45:53 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing data_sync_elevel() for some calls of pg_fsync()?"
}
]
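Thomas's "category 1" pattern from the thread above — write the whole file, fsync, close, and on any failure redo all of those steps rather than trusting the kernel's cache — can be sketched as follows. This is illustrative Python with an invented helper name, not PostgreSQL code.

```python
import os

def write_state_file(path, data, retries=3):
    # "Category 1": the whole file is rewritten on every attempt, so
    # correctness never depends on the kernel keeping dirty pages alive
    # after a failed fsync.
    for attempt in range(retries):
        try:
            fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
            try:
                os.write(fd, data)
                os.fsync(fd)  # a failure here just restarts the loop
            finally:
                os.close(fd)
            return
        except OSError:
            if attempt == retries - 1:
                raise
```

Category 2 writers (shared-buffer write-back) cannot retry this way, because the only other copy of the data is in the WAL — which is why data_sync_elevel() promotes those fsync failures to PANIC.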
[
{
"msg_contents": "Hello,\n\nGreg reported in [1] before, autovacuum ignores partitioned tables.\nThat is, even if individual partitions’ statistics are updated, its parent's\nstatistics are not updated. This is TODO for declarative partitioning.\nAs Amit mentioned in [2], a way to make parent's statistics from\npartitions' statistics without scanning the partitions would be nice,\nbut it will need a lot of modifications. So I tried to fix that using the\ncurrent analyze method.\n\nThe summary of the attached patch is as follows:\n* If the relation is a partitioned table, check its children if they need\n vacuum or analyze. Children need to do that are added to\n a table list for autovacuuum. At least one child is added to the list,\n the partitioned table is also added to the list. Then, autovacuum\n runs on all the tables in the list.\n* If the partitioned table has foreign partitions, ignore them.\n\nWhen the parent has children don't need vacuum/analyze or foreign\npartitions, parent's stats are updated scanning the current data of all\nchildren, so old stats and new are mixed within the partition tree.\nIs that suitable? Any thoughts?\n\n[1] https://www.postgresql.org/message-id/CAM-w4HMQKC8hw7nB9TW3OV%2BhkB5OUcPtvr_U_EiSOjByoa-e4Q%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CA%2BHiwqEeZQ-H2OVbHZ%3Dn2RNNPF84Hygi1HC-MDwC-VnBjpA1%3DQ%40mail.gmail.com\n\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center",
"msg_date": "Mon, 2 Dec 2019 18:02:42 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Autovacuum on partitioned table"
},
{
"msg_contents": "On Mon, 2019-12-02 at 18:02 +0900, yuzuko wrote:\n> Greg reported in [1] before, autovacuum ignores partitioned tables.\n> That is, even if individual partitions’ statistics are updated, its parent's\n> statistics are not updated. This is TODO for declarative partitioning.\n> As Amit mentioned in [2], a way to make parent's statistics from\n> partitions' statistics without scanning the partitions would be nice,\n> but it will need a lot of modifications. So I tried to fix that using the\n> current analyze method.\n> \n> The summary of the attached patch is as follows:\n> * If the relation is a partitioned table, check its children if they need\n> vacuum or analyze. Children need to do that are added to\n> a table list for autovacuuum. At least one child is added to the list,\n> the partitioned table is also added to the list. Then, autovacuum\n> runs on all the tables in the list.\n\nThat means that all partitions are vacuumed if only one of them needs it,\nright? This will result in way more vacuuming than necessary.\n\nWouldn't it be an option to update the partitioned table's statistics\nwhenever one of the partitions is vacuumed?\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 02 Dec 2019 10:19:01 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table"
},
{
"msg_contents": "Hi Laurenz,\n\nThanks for the comments.\n\nOn Mon, Dec 2, 2019 at 6:19 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Mon, 2019-12-02 at 18:02 +0900, yuzuko wrote:\n> > Greg reported in [1] before, autovacuum ignores partitioned tables.\n> > That is, even if individual partitions’ statistics are updated, its parent's\n> > statistics are not updated. This is TODO for declarative partitioning.\n> > As Amit mentioned in [2], a way to make parent's statistics from\n> > partitions' statistics without scanning the partitions would be nice,\n> > but it will need a lot of modifications. So I tried to fix that using the\n> > current analyze method.\n> >\n> > The summary of the attached patch is as follows:\n> > * If the relation is a partitioned table, check its children if they need\n> > vacuum or analyze. Children need to do that are added to\n> > a table list for autovacuuum. At least one child is added to the list,\n> > the partitioned table is also added to the list. Then, autovacuum\n> > runs on all the tables in the list.\n>\n> That means that all partitions are vacuumed if only one of them needs it,\n> right? This will result in way more vacuuming than necessary.\n>\nAutovacuum runs only partitions need vacuum/analyze, so unnecessary\npartitions stats are not updated. However, to make parent's stats,\nall children are scanned. It might be a waste of time.\n\n> Wouldn't it be an option to update the partitioned table's statistics\n> whenever one of the partitions is vacuumed?\n>\n> Yours,\n> Laurenz Albe\n>\n\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 2 Dec 2019 18:42:22 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Autovacuum on partitioned table"
},
{
"msg_contents": "Hi,\n\nAs Laurenz commented in this thread, I tried adding option\nto update parent's statistics during Autovacuum. To do that,\nI propose supporting 'autovacuum_enabled' option already\nexists on partitioned tables.\n\nIn the attached patch, you can use 'autovacuum_enabled' option\non partitioned table as usual, that is, a default value of this option\nis true. So if you don't need autovacuum on a partitioned table,\nyou have to specify the option:\nCREATE TABLE p(i int) partition by range(i) with (autovacuum_enabled=0);\n\nI'm not sure but I wonder if a suitable value as a default of\n'autovacuum_enabled' for partitioned tables might be false.\nBecause autovacuum on *partitioned tables* requires scanning\nall children to make partitioned tables' statistics.\nBut if the default value varies according to the relation,\nis it confusing? Any thoughts?\n\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center",
"msg_date": "Fri, 27 Dec 2019 12:37:15 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Autovacuum on partitioned table"
},
{
"msg_contents": "On Fri, 27 Dec 2019 at 12:37, yuzuko <yuzukohosoya@gmail.com> wrote:\n>\n> Hi,\n>\n> As Laurenz commented in this thread, I tried adding option\n> to update parent's statistics during Autovacuum. To do that,\n> I propose supporting 'autovacuum_enabled' option already\n> exists on partitioned tables.\n>\n> In the attached patch, you can use 'autovacuum_enabled' option\n> on partitioned table as usual, that is, a default value of this option\n> is true. So if you don't need autovacuum on a partitioned table,\n> you have to specify the option:\n> CREATE TABLE p(i int) partition by range(i) with (autovacuum_enabled=0);\n>\n> I'm not sure but I wonder if a suitable value as a default of\n> 'autovacuum_enabled' for partitioned tables might be false.\n> Because autovacuum on *partitioned tables* requires scanning\n> all children to make partitioned tables' statistics.\n> But if the default value varies according to the relation,\n> is it confusing? Any thoughts?\n\nI don't look at the patch deeply yet but your patch seems to attempt\nto vacuum on partitioned table. IIUC partitioned tables don't need to\nbe vacuumed and its all child tables are vacuumed instead if we pass\nthe partitioned table to vacuum() function. But autovacuum on child\ntables is normally triggered since their statistics are updated.\n\nI think it's a good idea to have that option but I think that doing\nautovacuum on the parent table every time when autovacuum is triggered\non one of its child tables is very high cost especially when there are\na lot of child tables. Instead I thought it's more straight forward if\nwe compare the summation of the statistics of child tables (e.g.\nn_live_tuples, n_dead_tuples etc) to vacuum thresholds when we\nconsider the needs of autovacuum on the parent table. What do you\nthink?\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 27 Dec 2019 14:01:14 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table"
},
{
"msg_contents": "Hello,\n\nOn Fri, Dec 27, 2019 at 2:02 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n> On Fri, 27 Dec 2019 at 12:37, yuzuko <yuzukohosoya@gmail.com> wrote:\n> > As Laurenz commented in this thread, I tried adding option\n> > to update parent's statistics during Autovacuum. To do that,\n> > I propose supporting 'autovacuum_enabled' option already\n> > exists on partitioned tables.\n> >\n> > In the attached patch, you can use 'autovacuum_enabled' option\n> > on partitioned table as usual, that is, a default value of this option\n> > is true. So if you don't need autovacuum on a partitioned table,\n> > you have to specify the option:\n> > CREATE TABLE p(i int) partition by range(i) with (autovacuum_enabled=0);\n> >\n> > I'm not sure but I wonder if a suitable value as a default of\n> > 'autovacuum_enabled' for partitioned tables might be false.\n> > Because autovacuum on *partitioned tables* requires scanning\n> > all children to make partitioned tables' statistics.\n> > But if the default value varies according to the relation,\n> > is it confusing? Any thoughts?\n>\n> I don't look at the patch deeply yet but your patch seems to attempt\n> to vacuum on partitioned table. IIUC partitioned tables don't need to\n> be vacuumed and its all child tables are vacuumed instead if we pass\n> the partitioned table to vacuum() function. But autovacuum on child\n> tables is normally triggered since their statistics are updated.\n>\n> I think it's a good idea to have that option but I think that doing\n> autovacuum on the parent table every time when autovacuum is triggered\n> on one of its child tables is very high cost especially when there are\n> a lot of child tables. Instead I thought it's more straight forward if\n> we compare the summation of the statistics of child tables (e.g.\n> n_live_tuples, n_dead_tuples etc) to vacuum thresholds when we\n> consider the needs of autovacuum on the parent table. 
What do you\n> think?\n\nThere's this old email where Tom outlines a few ideas about triggering\nauto-analyze on inheritance trees:\n\nhttps://www.postgresql.org/message-id/4823.1262132964%40sss.pgh.pa.us\n\nIf I'm reading that correctly, the idea is to track only\nchanges_since_analyze and none of the finer-grained stats like\nlive/dead tuples for inheritance parents (partitioned tables) using\nsome new pgstat infrastrcture, an idea that Hosoya-san also seems to\nbe considering per an off-list discussion. Besides the complexity of\ngetting that infrastructure in place, an important question is whether\nthe current system of applying threshold and scale factor to\nchanges_since_analyze should be used as-is for inheritance parents\n(partitioned tables), because if users set those parameters similarly\nto for regular tables, autovacuum might analyze partitioned tables\nmore than necessary. We'll either need a different formula, or some\ncommentary in the documentation about how partitioned tables might\nneed different setting, or maybe both.\n\nBy the way, maybe I'm misunderstanding what Sawada-san wrote above,\nbut the only missing piece seems to be a way to trigger an *analyze*\non the parent tables -- to collect optimizer statistics for the\ninheritance trees -- not vacuum, for which the existing system seems\nenough.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Tue, 28 Jan 2020 17:52:27 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table"
},
{
"msg_contents": "On Tue, 28 Jan 2020 at 17:52, Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Hello,\n>\n> On Fri, Dec 27, 2019 at 2:02 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> > On Fri, 27 Dec 2019 at 12:37, yuzuko <yuzukohosoya@gmail.com> wrote:\n> > > As Laurenz commented in this thread, I tried adding option\n> > > to update parent's statistics during Autovacuum. To do that,\n> > > I propose supporting 'autovacuum_enabled' option already\n> > > exists on partitioned tables.\n> > >\n> > > In the attached patch, you can use 'autovacuum_enabled' option\n> > > on partitioned table as usual, that is, a default value of this option\n> > > is true. So if you don't need autovacuum on a partitioned table,\n> > > you have to specify the option:\n> > > CREATE TABLE p(i int) partition by range(i) with (autovacuum_enabled=0);\n> > >\n> > > I'm not sure but I wonder if a suitable value as a default of\n> > > 'autovacuum_enabled' for partitioned tables might be false.\n> > > Because autovacuum on *partitioned tables* requires scanning\n> > > all children to make partitioned tables' statistics.\n> > > But if the default value varies according to the relation,\n> > > is it confusing? Any thoughts?\n> >\n> > I don't look at the patch deeply yet but your patch seems to attempt\n> > to vacuum on partitioned table. IIUC partitioned tables don't need to\n> > be vacuumed and its all child tables are vacuumed instead if we pass\n> > the partitioned table to vacuum() function. But autovacuum on child\n> > tables is normally triggered since their statistics are updated.\n> >\n> > I think it's a good idea to have that option but I think that doing\n> > autovacuum on the parent table every time when autovacuum is triggered\n> > on one of its child tables is very high cost especially when there are\n> > a lot of child tables. 
Instead I thought it's more straight forward if\n> > we compare the summation of the statistics of child tables (e.g.\n> > n_live_tuples, n_dead_tuples etc) to vacuum thresholds when we\n> > consider the needs of autovacuum on the parent table. What do you\n> > think?\n>\n> There's this old email where Tom outlines a few ideas about triggering\n> auto-analyze on inheritance trees:\n>\n> https://www.postgresql.org/message-id/4823.1262132964%40sss.pgh.pa.us\n>\n> If I'm reading that correctly, the idea is to track only\n> changes_since_analyze and none of the finer-grained stats like\n> live/dead tuples for inheritance parents (partitioned tables) using\n> some new pgstat infrastrcture, an idea that Hosoya-san also seems to\n> be considering per an off-list discussion. Besides the complexity of\n> getting that infrastructure in place, an important question is whether\n> the current system of applying threshold and scale factor to\n> changes_since_analyze should be used as-is for inheritance parents\n> (partitioned tables), because if users set those parameters similarly\n> to for regular tables, autovacuum might analyze partitioned tables\n> more than necessary.\n\nHow are you going to track changes_since_analyze of partitioned table?\nIt's just an idea but we can accumulate changes_since_analyze of\npartitioned table by adding child tables's value after analyzing each\nchild table. And compare the partitioned tables value to the threshold\nthat is computed by (autovacuum_analyze_threshold + total rows\nincluding all child tables * autovacuum_analyze_scale_factor).\n\n> By the way, maybe I'm misunderstanding what Sawada-san wrote above,\n> but the only missing piece seems to be a way to trigger an *analyze*\n> on the parent tables -- to collect optimizer statistics for the\n> inheritance trees -- not vacuum, for which the existing system seems\n> enough.\n\nRight. 
We need only autoanalyze on partitioned tables.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 28 Jan 2020 20:27:15 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table"
},
{
"msg_contents": "Hello,\n\n> Besides the complexity of\n> getting that infrastructure in place, an important question is whether\n> the current system of applying threshold and scale factor to\n> changes_since_analyze should be used as-is for inheritance parents\n> (partitioned tables), because if users set those parameters similarly\n> to for regular tables, autovacuum might analyze partitioned tables\n> more than necessary. We'll either need a different formula, or some\n> commentary in the documentation about how partitioned tables might\n> need different setting, or maybe both.\n>\nI'm not sure but I think we need new autovacuum parameters for\npartitioned tables (autovacuum, autovacuum_analyze_threshold,\nautovacuum_analyze_scale_factor) because whether it's necessary\nto run autovacuum on partitioned tables will depend on users.\nWhat do you think?\n\n> How are you going to track changes_since_analyze of partitioned table?\n> It's just an idea but we can accumulate changes_since_analyze of\n> partitioned table by adding child tables's value after analyzing each\n> child table. And compare the partitioned tables value to the threshold\n> that is computed by (autovacuum_analyze_threshold + total rows\n> including all child tables * autovacuum_analyze_scale_factor).\n>\nThe idea Sawada-san mentioned is similar to mine. Also, for tracking\nchanges_since_analyze, we have to make partitioned table's statistics.\nTo do that, we can invent a new PgStat_StatPartitionedTabEntry based\non PgStat_StatTabEntry. Through talking with Amit, I think the new structure\nneeds the following members:\n\ntableid\nchanges_since_analyze\nanalyze_timestamp\nanalyze_count\nautovac_analyze_timestamp\nautovac_analyze_count\n\nVacuum doesn't run on partitioned tables, so I think members related to\n(auto) vacuum need not be contained in the structure.\n\nI'm still writing a patch. I'll send it this week.\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 29 Jan 2020 11:29:35 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Autovacuum on partitioned table"
},
{
"msg_contents": "On Wed, Jan 29, 2020 at 11:29 AM yuzuko <yuzukohosoya@gmail.com> wrote:\n> > Besides the complexity of\n> > getting that infrastructure in place, an important question is whether\n> > the current system of applying threshold and scale factor to\n> > changes_since_analyze should be used as-is for inheritance parents\n> > (partitioned tables), because if users set those parameters similarly\n> > to for regular tables, autovacuum might analyze partitioned tables\n> > more than necessary. We'll either need a different formula, or some\n> > commentary in the documentation about how partitioned tables might\n> > need different setting, or maybe both.\n> >\n> I'm not sure but I think we need new autovacuum parameters for\n> partitioned tables (autovacuum, autovacuum_analyze_threshold,\n> autovacuum_analyze_scale_factor) because whether it's necessary\n> to run autovacuum on partitioned tables will depend on users.\n> What do you think?\n\nYes, we will need to first support those parameters on partitioned\ntables. Currently, you get:\n\ncreate table p (a int) partition by list (a) with\n(autovacuum_analyze_scale_factor=0);\nERROR: unrecognized parameter \"autovacuum_analyze_scale_factor\"\n\n> > How are you going to track changes_since_analyze of partitioned table?\n> > It's just an idea but we can accumulate changes_since_analyze of\n> > partitioned table by adding child tables's value after analyzing each\n> > child table. And compare the partitioned tables value to the threshold\n> > that is computed by (autovacuum_analyze_threshold + total rows\n> > including all child tables * autovacuum_analyze_scale_factor).\n> >\n> The idea Sawada-san mentioned is similar to mine.\n\nSo if I understand this idea correctly, a partitioned table's analyze\nwill only be triggered when partitions are analyzed. That is,\ninserts, updates, deletes of tuples in partitions will be tracked by\npgstat, which in turn is used by autovacuum to trigger analyze on\npartitions. 
Then, the partitions' changes_since_analyze is added into the\nparent's changes_since_analyze, which in turn *may* trigger analyze on\nthe parent. I said \"may\", because it would take multiple partition\nanalyzes to accumulate enough changes to trigger one on the parent.\nAm I getting that right?\n\n> Also, for tracking\n> changes_since_analyze, we have to make partitioned table's statistics.\n> To do that, we can invent a new PgStat_StatPartitionedTabEntry based\n> on PgStat_StatTabEntry. Through talking with Amit, I think the new structure\n> needs the following members:\n>\n> tableid\n> changes_since_analyze\n> analyze_timestamp\n> analyze_count\n> autovac_analyze_timestamp\n> autovac_analyze_count\n>\n> Vacuum doesn't run on partitioned tables, so I think members related to\n> (auto) vacuum need not be contained in the structure.\n\nOn second thought, maybe we don't need a new PgStat_ struct. We can\njust use what's used for regular tables and leave the fields that\ndon't make sense for partitioned tables set to 0, such as those that\ntrack the counts of scans, tuples, etc. That means we don't have to\nmess with interfaces of existing functions, like this one:\n\nstatic void relation_needs_vacanalyze(Oid relid,\n AutoVacOpts *relopts,\n Form_pg_class classForm,\n PgStat_StatTabEntry *tabentry, ...\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 29 Jan 2020 17:56:40 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table"
},
{
"msg_contents": "On Wed, Jan 29, 2020 at 05:56:40PM +0900, Amit Langote wrote:\n> Yes, we will need to first support those parameters on partitioned\n> tables. Currently, you get:\n> \n> create table p (a int) partition by list (a) with\n> (autovacuum_analyze_scale_factor=0);\n> ERROR: unrecognized parameter \"autovacuum_analyze_scale_factor\"\n\nWorth noting: partitioned tables support zero reloptions as of now,\nbut there is the facility in place to allow that (see\nRELOPT_KIND_PARTITIONED and partitioned_table_reloptions).\n--\nMichael",
"msg_date": "Wed, 29 Jan 2020 20:38:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table"
},
{
"msg_contents": "On Wed, 29 Jan 2020 at 17:56, Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Wed, Jan 29, 2020 at 11:29 AM yuzuko <yuzukohosoya@gmail.com> wrote:\n> > > Besides the complexity of\n> > > getting that infrastructure in place, an important question is whether\n> > > the current system of applying threshold and scale factor to\n> > > changes_since_analyze should be used as-is for inheritance parents\n> > > (partitioned tables), because if users set those parameters similarly\n> > > to for regular tables, autovacuum might analyze partitioned tables\n> > > more than necessary. We'll either need a different formula, or some\n> > > commentary in the documentation about how partitioned tables might\n> > > need different setting, or maybe both.\n> > >\n> > I'm not sure but I think we need new autovacuum parameters for\n> > partitioned tables (autovacuum, autovacuum_analyze_threshold,\n> > autovacuum_analyze_scale_factor) because whether it's necessary\n> > to run autovacuum on partitioned tables will depend on users.\n> > What do you think?\n>\n> Yes, we will need to first support those parameters on partitioned\n> tables. Currently, you get:\n>\n> create table p (a int) partition by list (a) with\n> (autovacuum_analyze_scale_factor=0);\n> ERROR: unrecognized parameter \"autovacuum_analyze_scale_factor\"\n>\n> > > How are you going to track changes_since_analyze of partitioned table?\n> > > It's just an idea but we can accumulate changes_since_analyze of\n> > > partitioned table by adding child tables's value after analyzing each\n> > > child table. And compare the partitioned tables value to the threshold\n> > > that is computed by (autovacuum_analyze_threshold + total rows\n> > > including all child tables * autovacuum_analyze_scale_factor).\n> > >\n> > The idea Sawada-san mentioned is similar to mine.\n>\n> So if I understand this idea correctly, a partitioned table's analyze\n> will only be triggered when partitions are analyzed. 
That is,\n> inserts, updates, deletes of tuples in partitions will be tracked by\n> pgstat, which in turn is used by autovacuum to trigger analyze on\n> partitions. Then, partitions changes_since_analyze is added into the\n> parent's changes_since_analyze, which in turn *may* trigger analyze\n> parent. I said \"may\", because it would take multiple partition\n> analyzes to accumulate enough changes to trigger one on the parent.\n> Am I getting that right?\n\nYeah that is what I meant. In addition, adding partition's\nchanges_since_analyze to its parent needs to be done recursively as\nthe parent table could also be a partitioned table.\n\n>\n> > Also, for tracking\n> > changes_since_analyze, we have to make partitioned table's statistics.\n> > To do that, we can invent a new PgStat_StatPartitionedTabEntry based\n> > on PgStat_StatTabEntry. Through talking with Amit, I think the new structure\n> > needs the following members:\n> >\n> > tableid\n> > changes_since_analyze\n> > analyze_timestamp\n> > analyze_count\n> > autovac_analyze_timestamp\n> > autovac_analyze_count\n> >\n> > Vacuum doesn't run on partitioned tables, so I think members related to\n> > (auto) vacuum need not be contained in the structure.\n>\n> On second thought, maybe we don't need a new PgStat_ struct. We can\n> just use what's used for regular tables and leave the fields that\n> don't make sense for partitioned tables set to 0, such as those that\n> track the counts of scans, tuples, etc. That means we don't have to\n> mess with interfaces of existing functions, like this one:\n>\n> static void relation_needs_vacanalyze(Oid relid,\n> AutoVacOpts *relopts,\n> Form_pg_class classForm,\n> PgStat_StatTabEntry *tabentry, ...\n\n+1\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 2 Feb 2020 12:52:53 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table"
},
{
"msg_contents": "On Sun, Feb 2, 2020 at 12:53 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n> On Wed, 29 Jan 2020 at 17:56, Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Wed, Jan 29, 2020 at 11:29 AM yuzuko <yuzukohosoya@gmail.com> wrote:\n> > > > How are you going to track changes_since_analyze of partitioned table?\n> > > > It's just an idea but we can accumulate changes_since_analyze of\n> > > > partitioned table by adding child tables's value after analyzing each\n> > > > child table. And compare the partitioned tables value to the threshold\n> > > > that is computed by (autovacuum_analyze_threshold + total rows\n> > > > including all child tables * autovacuum_analyze_scale_factor).\n> > > >\n> > > The idea Sawada-san mentioned is similar to mine.\n> >\n> > So if I understand this idea correctly, a partitioned table's analyze\n> > will only be triggered when partitions are analyzed. That is,\n> > inserts, updates, deletes of tuples in partitions will be tracked by\n> > pgstat, which in turn is used by autovacuum to trigger analyze on\n> > partitions. Then, partitions changes_since_analyze is added into the\n> > parent's changes_since_analyze, which in turn *may* trigger analyze\n> > parent. I said \"may\", because it would take multiple partition\n> > analyzes to accumulate enough changes to trigger one on the parent.\n> > Am I getting that right?\n>\n> Yeah that is what I meant. In addition, adding partition's\n> changes_since_analyze to its parent needs to be done recursively as\n> the parent table could also be a partitioned table.\n\nThat's a good point. So, changes_since_analyze increments are\nessentially propagated from leaf partitions all the way up to the\nroot table, including any intermediate partitioned tables. We'll need\nto consider whether we should propagate only one level at a time (from\nbottom of the tree) or update all parents up to the root, every time a\nleaf partition is analyzed. 
If we do the latter, that might end up\ntriggering analyze on all the parents at the same time, causing\nrepeated scanning of the same child tables in close intervals,\nalthough setting the analyze threshold and scale factor of the parent\ntables of respective levels wisely can help avoid any negative impact\nof that.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Mon, 3 Feb 2020 11:36:55 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table"
},
{
"msg_contents": "Hello,\n\nI'm sorry for the delay.\nI attach the latest patch based on the discussion in this thread.\n\n> > Yeah that is what I meant. In addition, adding partition's\n> > changes_since_analyze to its parent needs to be done recursively as\n> > the parent table could also be a partitioned table.\n>\n> That's a good point. So, changes_since_analyze increments are\n> essentially propagated from leaf partitions to all the way up to the\n> root table, including any intermediate partitioned tables. We'll need\n> to consider whether we should propagate only one level at a time (from\n> bottom of the tree) or update all parents up to the root, every time a\n> leaf partition is analyzed.\n\nFor multi-level partitioning, all parents' changes_since_analyze will be\nupdated whenever analyzing a leaf partition in this patch.\nCould you please check the patch again?\n\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center",
"msg_date": "Thu, 20 Feb 2020 15:33:49 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Autovacuum on partitioned table"
},
{
"msg_contents": "Hosoya-san,\n\nOn Thu, Feb 20, 2020 at 3:34 PM yuzuko <yuzukohosoya@gmail.com> wrote:\n> Attach the latest patch based on discussion in this thread.\n>\n> > > Yeah that is what I meant. In addition, adding partition's\n> > > changes_since_analyze to its parent needs to be done recursively as\n> > > the parent table could also be a partitioned table.\n> >\n> > That's a good point. So, changes_since_analyze increments are\n> > essentially propagated from leaf partitions to all the way up to the\n> > root table, including any intermediate partitioned tables. We'll need\n> > to consider whether we should propagate only one level at a time (from\n> > bottom of the tree) or update all parents up to the root, every time a\n> > leaf partition is analyzed.\n>\n> For multi-level partitioning, all parents' changes_since_analyze will be\n> updated whenever analyzing a leaf partition in this patch.\n> Could you please check the patch again?\n\nThank you for the new patch.\n\nI built and confirmed that the patch works.\n\nHere are some comments:\n\n* White-space noise in the diff (space used where tab is expected);\nplease check with git diff --check and fix.\n\n* Names changes_tuples, m_changes_tuples should be changed_tuples and\nm_changed_tuples, respectively?\n\n* Did you intend to make it so that we now report *all* inherited\nstats to the stats collector, not just those for partitioned tables?\nIOW, did you intend the new feature to also cover traditional\ninheritance parents? I am talking about the following diff:\n\n /*\n- * Report ANALYZE to the stats collector, too. However, if doing\n- * inherited stats we shouldn't report, because the stats collector only\n- * tracks per-table stats. Reset the changes_since_analyze counter only\n- * if we analyzed all columns; otherwise, there is still work for\n- * auto-analyze to do.\n+ * Report ANALYZE to the stats collector, too. 
If the table is a\n+ * partition, report changes_since_analyze of its parent because\n+ * autovacuum process for partitioned tables needs it. Reset the\n+ * changes_since_analyze counter only if we analyzed all columns;\n+ * otherwise, there is still work for auto-analyze to do.\n */\n- if (!inh)\n- pgstat_report_analyze(onerel, totalrows, totaldeadrows,\n- (va_cols == NIL));\n+ pgstat_report_analyze(onerel, totalrows, totaldeadrows,\n+ (va_cols == NIL));\n\n* I may be missing something, but why doesn't do_autovacuum() fetch a\npartitioned table's entry from pgstat instead of fetching that for\nindividual children and adding? That is, why do we need to do the\nfollowing:\n\n+ /*\n+ * If the relation is a partitioned table, we check it\nusing reltuples\n+ * added up childrens' and changes_since_analyze tracked\nby stats collector.\n\n\nMore later...\n\nThanks,\nAmit\n\n\n",
"msg_date": "Thu, 20 Feb 2020 16:50:50 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table"
},
{
"msg_contents": "On Thu, Feb 20, 2020 at 4:50 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> * I may be missing something, but why doesn't do_autovacuum() fetch a\n> partitioned table's entry from pgstat instead of fetching that for\n> individual children and adding? That is, why do we need to do the\n> following:\n>\n> + /*\n> + * If the relation is a partitioned table, we check it\n> using reltuples\n> + * added up childrens' and changes_since_analyze tracked\n> by stats collector.\n\nOh, it's only adding up children's pg_class.reltuples, not pgstat\nstats. We need to do that because a partitioned table's\npg_class.reltuples is always 0 and correctly so. Sorry for not\nreading the patch properly.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Thu, 20 Feb 2020 17:32:40 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table"
},
{
"msg_contents": "On Thu, Feb 20, 2020 at 5:32 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Thu, Feb 20, 2020 at 4:50 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > * I may be missing something, but why doesn't do_autovacuum() fetch a\n> > partitioned table's entry from pgstat instead of fetching that for\n> > individual children and adding? That is, why do we need to do the\n> > following:\n> >\n> > + /*\n> > + * If the relation is a partitioned table, we check it\n> > using reltuples\n> > + * added up childrens' and changes_since_analyze tracked\n> > by stats collector.\n>\n> Oh, it's only adding up children's pg_class.reltuple, not pgstat\n> stats. We need to do that because a partitioned table's\n> pg_class.reltuples is always 0 and correctly so. Sorry for not\n> reading the patch properly.\n\nHaving read the relevant diffs again, I think this could be done\nwithout duplicating code too much. You seem to have added the same\nlogic in two places: do_autovacuum() and table_recheck_autovac().\nMore importantly, part of the logic of relation_needs_vacanalyze() is\nduplicated in both of the aforementioned places, which I think is\nunnecessary and undesirable if you consider maintainability. I think\nwe could just add the logic to compute reltuples for partitioned\ntables at the beginning of relation_needs_vacanalyze() and be done. I\nhave attached a delta patch to show what I mean. Please check and\ntell what you think.\n\nThanks,\nAmit",
"msg_date": "Thu, 20 Feb 2020 18:29:37 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table"
},
{
"msg_contents": "Hello Amit-san,\n\nThanks for your comments.\n\n> * White-space noise in the diff (space used where tab is expected);\n> please check with git diff --check and fix.\nFixed it.\n\n> * Names changes_tuples, m_changes_tuples should be changed_tuples and\n> m_changed_tuples, respectively?\nYes, I modified it.\n\n> * Did you intend to make it so that we now report *all* inherited\n> stats to the stats collector, not just those for partitioned tables?\n> IOW, did you intend the new feature to also cover traditional\n> inheritance parents? I am talking about the following diff:\n>\nI modified it as follows to apply this feature only to declarative partitioning.\n\n- if (!inh)\n- pgstat_report_analyze(onerel, totalrows, totaldeadrows,\n- (va_cols == NIL));\n+ if (!inh || onerel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n+ pgstat_report_analyze(onerel, totalrows, totaldeadrows,\n+ (va_cols == NIL));\n\n\n> Having read the relevant diffs again, I think this could be done\n> without duplicating code too much. You seem to have added the same\n> logic in two places: do_autovacuum() and table_recheck_autovac().\n> More importantly, part of the logic of relation_needs_vacanalyze() is\n> duplicated in both of the aforementioned places, which I think is\n> unnecessary and undesirable if you consider maintainability. I think\n> we could just add the logic to compute reltuples for partitioned\n> tables at the beginning of relation_needs_vacanalyze() and be done.\n>\nYes, indeed. Partitioned tables don't need vacuum, so I added a new\nchecking process for partitioned tables outside relation_needs_vacanalyze().\nHowever, partitioned tables' tabentry->n_dead_tuples are always 0, so\ndovacuum is always false. So I think that checking both auto vacuum\nand analyze for partitioned tables doesn't matter. 
I merged v3_amit_delta.patch\ninto the new patch and found a minor bug: a partitioned table's reltuples was\noverwritten with its classForm->reltuples, so I fixed it.\n\nAlso, I think partitioned tables' changes_since_analyze should be reported\nonly during the autovacuum process, so I fixed that too.\n\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center",
"msg_date": "Fri, 21 Feb 2020 15:14:05 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Autovacuum on partitioned table"
},
{
"msg_contents": "On Fri, 21 Feb 2020 at 15:14, yuzuko <yuzukohosoya@gmail.com> wrote:\n>\n> Hello Amit-san,\n>\n> Thanks for your comments.\n>\n> > * White-space noise in the diff (space used where tab is expected);\n> > please check with git diff --check and fix.\n> Fixed it.\n>\n> > * Names changes_tuples, m_changes_tuples should be changed_tuples and\n> > m_changed_tuples, respectively?\n> Yes, I modified it.\n>\n> > * Did you intend to make it so that we now report *all* inherited\n> > stats to the stats collector, not just those for partitioned tables?\n> > IOW, do did you intend the new feature to also cover traditional\n> > inheritance parents? I am talking about the following diff:\n> >\n> I modified as follows to apply this feature to only declaretive partitioning.\n>\n> - if (!inh)\n> - pgstat_report_analyze(onerel, totalrows, totaldeadrows,\n> - (va_cols == NIL));\n> + if (!inh || onerel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n> + pgstat_report_analyze(onerel, totalrows, totaldeadrows,\n> + (va_cols == NIL));\n>\n>\n> > Having read the relevant diffs again, I think this could be done\n> > without duplicating code too much. You seem to have added the same\n> > logic in two places: do_autovacuum() and table_recheck_autovac().\n> > More importantly, part of the logic of relation_needs_vacanalyze() is\n> > duplicated in both of the aforementioned places, which I think is\n> > unnecessary and undesirable if you consider maintainability. I think\n> > we could just add the logic to compute reltuples for partitioned\n> > tables at the beginning of relation_needs_vacanalyze() and be done.\n> >\n> Yes, indeed. Partitioned tables don't need to vacuum so I added new\n> checking process for partitioned tables outside relation_needs_vacanalyze().\n> However, partitioned tables' tabentry->n_dead_tuples are always 0 so\n> dovacuum is always false. So I think that checking both auto vacuum\n> and analyze for partitioned tables doesn't matter. 
I merged v3_amit_delta.patch\n> into the new patch and found minor bug, partitioned table's reltuples is\n> overwritten with it's classForm->reltuples, so I fixed it.\n>\n> Also, I think partitioned tables' changes_since_analyze should be reported\n> only when Autovacuum process. So I fixed it too.\n\nThank you for updating the patch. I tested v4 patch.\n\nAfter analyze or autoanalyze on partitioned table n_live_tup and\nn_dead_tup are updated. However, TRUNCATE and VACUUM on the\npartitioned table don't change these values until invoking analyze or\nautoanalyze whereas in normal tables these values are reset or\nchanged. For example, with your patch:\n\n* Before\n relname | n_live_tup | n_dead_tup | n_mod_since_analyze\n---------+------------+------------+---------------------\n c1 | 11 | 0 | 0\n c2 | 11 | 0 | 0\n c3 | 11 | 0 | 0\n c4 | 11 | 0 | 0\n c5 | 11 | 0 | 0\n parent | 55 | 0 | 0\n(6 rows)\n\n* After 'TRUNCATE parent'\n relname | n_live_tup | n_dead_tup | n_mod_since_analyze\n---------+------------+------------+---------------------\n c1 | 0 | 0 | 0\n c2 | 0 | 0 | 0\n c3 | 0 | 0 | 0\n c4 | 0 | 0 | 0\n c5 | 0 | 0 | 0\n parent | 55 | 0 | 0\n(6 rows)\n\n* Before\n relname | n_live_tup | n_dead_tup | n_mod_since_analyze\n---------+------------+------------+---------------------\n c1 | 0 | 11 | 0\n c2 | 0 | 11 | 0\n c3 | 0 | 11 | 0\n c4 | 0 | 11 | 0\n c5 | 0 | 11 | 0\n parent | 0 | 55 | 0\n(6 rows)\n\n* After 'VACUUM parent'\n relname | n_live_tup | n_dead_tup | n_mod_since_analyze\n---------+------------+------------+---------------------\n c1 | 0 | 0 | 0\n c2 | 0 | 0 | 0\n c3 | 0 | 0 | 0\n c4 | 0 | 0 | 0\n c5 | 0 | 0 | 0\n parent | 0 | 55 | 0\n(6 rows)\n\nWe can make it work correctly but I think perhaps we can skip updating\nstatistics values of partitioned tables other than n_mod_since_analyze\nas the first step. 
Because if we also support n_live_tup and\nn_dead_tup, users might get confused because other statistics values such\nas seq_scan and seq_tup_read are not supported.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 21 Feb 2020 16:47:06 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table"
},
{
"msg_contents": "On Fri, Feb 21, 2020 at 4:47 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n> Thank you for updating the patch. I tested v4 patch.\n>\n> After analyze or autoanalyze on partitioned table n_live_tup and\n> n_dead_tup are updated. However, TRUNCATE and VACUUM on the\n> partitioned table don't change these values until invoking analyze or\n> autoanalyze whereas in normal tables these values are reset or\n> changed. For example, with your patch:\n>\n> * Before\n> relname | n_live_tup | n_dead_tup | n_mod_since_analyze\n> ---------+------------+------------+---------------------\n> c1 | 11 | 0 | 0\n> c2 | 11 | 0 | 0\n> c3 | 11 | 0 | 0\n> c4 | 11 | 0 | 0\n> c5 | 11 | 0 | 0\n> parent | 55 | 0 | 0\n> (6 rows)\n>\n> * After 'TRUNCATE parent'\n> relname | n_live_tup | n_dead_tup | n_mod_since_analyze\n> ---------+------------+------------+---------------------\n> c1 | 0 | 0 | 0\n> c2 | 0 | 0 | 0\n> c3 | 0 | 0 | 0\n> c4 | 0 | 0 | 0\n> c5 | 0 | 0 | 0\n> parent | 55 | 0 | 0\n> (6 rows)\n>\n> * Before\n> relname | n_live_tup | n_dead_tup | n_mod_since_analyze\n> ---------+------------+------------+---------------------\n> c1 | 0 | 11 | 0\n> c2 | 0 | 11 | 0\n> c3 | 0 | 11 | 0\n> c4 | 0 | 11 | 0\n> c5 | 0 | 11 | 0\n> parent | 0 | 55 | 0\n> (6 rows)\n>\n> * After 'VACUUM parent'\n> relname | n_live_tup | n_dead_tup | n_mod_since_analyze\n> ---------+------------+------------+---------------------\n> c1 | 0 | 0 | 0\n> c2 | 0 | 0 | 0\n> c3 | 0 | 0 | 0\n> c4 | 0 | 0 | 0\n> c5 | 0 | 0 | 0\n> parent | 0 | 55 | 0\n> (6 rows)\n>\n> We can make it work correctly but I think perhaps we can skip updating\n> statistics values of partitioned tables other than n_mod_since_analyze\n> as the first step. Because if we support also n_live_tup and\n> n_dead_tup, user might get confused that other statistics values such\n> as seq_scan, seq_tup_read however are not supported.\n\n+1, that makes sense.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Fri, 21 Feb 2020 17:35:07 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table"
},
{
"msg_contents": "Hi,\n\nThanks for reviewing the patch.\n\n> > We can make it work correctly but I think perhaps we can skip updating\n> > statistics values of partitioned tables other than n_mod_since_analyze\n> > as the first step. Because if we support also n_live_tup and\n> > n_dead_tup, user might get confused that other statistics values such\n> > as seq_scan, seq_tup_read however are not supported.\n>\n> +1, that makes sense.\n>\nYes, indeed. I modified it not to update statistics other than\nn_mod_since_analyze.\nI attach the v5 patch. In this patch, pgstat_report_analyze() always reports 0 as\nmsg.m_live_tuples and m_dead_tuples when the relation is partitioned.\n\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center",
"msg_date": "Wed, 26 Feb 2020 11:32:49 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Autovacuum on partitioned table"
},
{
"msg_contents": "On Wed, 26 Feb 2020 at 11:33, yuzuko <yuzukohosoya@gmail.com> wrote:\n>\n> Hi,\n>\n> Thanks for reviewing the patch.\n>\n> > > We can make it work correctly but I think perhaps we can skip updating\n> > > statistics values of partitioned tables other than n_mod_since_analyze\n> > > as the first step. Because if we support also n_live_tup and\n> > > n_dead_tup, user might get confused that other statistics values such\n> > > as seq_scan, seq_tup_read however are not supported.\n> >\n> > +1, that makes sense.\n> >\n> Yes, Indeed. I modified it not to update statistics other than\n> n_mod_since_analyze.\n> Attach the v5 patch. In this patch, pgstat_report_analyze() always reports 0 as\n> msg.m_live_tuples and m_dead_tuples when the relation is partitioned.\n>\n\nThank you for updating the patch. I'll look at it. I'd recommend\nregistering this patch in the next commit fest so as not to forget it.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 28 Feb 2020 11:02:36 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table"
},
{
"msg_contents": "Hello Yuzuko,\n\n> +\t * Report ANALYZE to the stats collector, too. If the table is a\n> +\t * partition, report changes_since_analyze of its parent because\n> +\t * autovacuum process for partitioned tables needs it. Reset the\n> +\t * changes_since_analyze counter only if we analyzed all columns;\n> +\t * otherwise, there is still work for auto-analyze to do.\n> \t */\n> -\tif (!inh)\n> -\t\tpgstat_report_analyze(onerel, totalrows, totaldeadrows,\n> -\t\t\t\t\t\t\t (va_cols == NIL));\n> +\tif (!inh || onerel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n> +\tpgstat_report_analyze(onerel, totalrows, totaldeadrows,\n> +\t\t\t\t\t\t (va_cols == NIL));\n \nHmm, I think the comment has a bug: it says \"report ... of its parent\"\nbut the report is of the same rel. (The pgstat_report_analyze line is\nmis-indented also).\n\n\n> \t/*\n> +\t * If the relation is a partitioned table, we must add up children's\n> +\t * reltuples.\n> +\t */\n> +\tif (classForm->relkind == RELKIND_PARTITIONED_TABLE)\n> +\t{\n> +\t\tList *children;\n> +\t\tListCell *lc;\n> +\n> +\t\treltuples = 0;\n> +\n> +\t\t/* Find all members of inheritance set taking AccessShareLock */\n> +\t\tchildren = find_all_inheritors(relid, AccessShareLock, NULL);\n> +\n> +\t\tforeach(lc, children)\n> +\t\t{\n> +\t\t\tOid childOID = lfirst_oid(lc);\n> +\t\t\tHeapTuple childtuple;\n> +\t\t\tForm_pg_class childclass;\n> +\n> +\t\t\t/* Ignore the parent table */\n> +\t\t\tif (childOID == relid)\n> +\t\t\t\tcontinue;\n\nI think this loop counts partitioned partitions multiple times, because\nwe add up reltuples for all levels, no? 
(If I'm wrong, that is, if\na partitioned rel does not have reltuples, then why skip the parent?)\n \n> +\t/*\n> +\t * If the table is a leaf partition, tell the stats collector its parent's\n> +\t * changes_since_analyze for auto analyze\n> +\t */\n> +\tif (IsAutoVacuumWorkerProcess() &&\n> +\t\trel->rd_rel->relispartition &&\n> +\t\t!(rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE))\n\nI'm not sure I understand why we do this only on autovac. Why not all\nanalyzes?\n\n> +\t{\n> +\t\tOid parentoid;\n> +\t\tRelation parentrel;\n> +\t\tPgStat_StatDBEntry *dbentry;\n> +\t\tPgStat_StatTabEntry *tabentry;\n> +\n> +\t\t/* Get its parent table's Oid and relation */\n> +\t\tparentoid = get_partition_parent(RelationGetRelid(rel));\n> +\t\tparentrel = table_open(parentoid, AccessShareLock);\n\nClimbing up the partitioning hierarchy acquiring locks on ancestor\nrelations opens up for deadlocks. It's better to avoid that. (As a\ntest, you could try what happens if you lock the topmost relation with\naccess-exclusive and leave a transaction open, then have autoanalyze\nrun). At the same time, I wonder if it's sensible to move one level up\nhere, and also have pgstat_report_partanalyze move more levels up.\n\n> + * pgstat_report_partanalyze() -\n> + *\n> + *\tTell the collector about the parent table of which partition just analyzed.\n> + *\n> + * Caller must provide a child's changes_since_analyze as a parents.\n\nI'm not sure what the last line is trying to say.\n\n\nThanks,\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 27 Feb 2020 23:25:45 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table"
},
{
"msg_contents": "Hosoya-san,\n\nThanks for the new patch.\n\nOn Wed, Feb 26, 2020 at 11:33 AM yuzuko <yuzukohosoya@gmail.com> wrote:\n> Attach the v5 patch. In this patch, pgstat_report_analyze() always reports 0 as\n> msg.m_live_tuples and m_dead_tuples when the relation is partitioned.\n\nSome comments:\n\n+ * PgStat_MsgPartAnalyze Sent by the backend or autovacuum daemon\n+ * after ANALYZE for partitioned tables\n\nLooking at the way this message is used, it does not seem to be an\n\"analyze\" message and also it's not sent \"after ANALYZE of partitioned\ntables\", but really after ANALYZE of leaf partitions. Analyze (for\nboth partitioned tables and leaf partitions) is reported as a\nPgStat_MsgAnalyze message as before. It seems that\nPgStat_MsgPartAnalyze is only sent to update a leaf partition's\nparent's (and recursively any grandparents') changes_since_analyze\ncounters, so maybe we should find a different name for it. Maybe,\nPgStat_MsgPartChanges and accordingly the message type enum value.\n\n /*\n- * Report ANALYZE to the stats collector, too. However, if doing\n- * inherited stats we shouldn't report, because the stats collector only\n- * tracks per-table stats. Reset the changes_since_analyze counter only\n- * if we analyzed all columns; otherwise, there is still work for\n- * auto-analyze to do.\n+ * Report ANALYZE to the stats collector, too. If the table is a\n+ * partition, report changes_since_analyze of its parent because\n+ * autovacuum process for partitioned tables needs it. Reset the\n+ * changes_since_analyze counter only if we analyzed all columns;\n+ * otherwise, there is still work for auto-analyze to do.\n */\n\nThe new comment says \"partitions\", which we typically use to refer to\na child table, but this comment really talks about parent tables. 
Old\ncomment says we don't report \"inherited stats\", presumably because\nstats collector lacks the infrastructure to distinguish a table's\ninherited stats and own stats, at least in the case of traditional\ninheritance. With this patch, we are making an exception for\npartitioned tables, because we are also teaching the stats collector\nto maintain at least changes_since_analyze for them that accumulates\ncounts of changed tuples from partitions.\n\nIt seems Alvaro already reported some of the other issues I had with\nthe patch, such as why partanalyze messages are only sent from a\nautovacuum worker.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Fri, 28 Feb 2020 11:31:23 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table"
},
{
"msg_contents": "On Fri, Feb 28, 2020 at 11:25 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> > /*\n> > + * If the relation is a partitioned table, we must add up children's\n> > + * reltuples.\n> > + */\n> > + if (classForm->relkind == RELKIND_PARTITIONED_TABLE)\n> > + {\n> > + List *children;\n> > + ListCell *lc;\n> > +\n> > + reltuples = 0;\n> > +\n> > + /* Find all members of inheritance set taking AccessShareLock */\n> > + children = find_all_inheritors(relid, AccessShareLock, NULL);\n> > +\n> > + foreach(lc, children)\n> > + {\n> > + Oid childOID = lfirst_oid(lc);\n> > + HeapTuple childtuple;\n> > + Form_pg_class childclass;\n> > +\n> > + /* Ignore the parent table */\n> > + if (childOID == relid)\n> > + continue;\n>\n> I think this loop counts partitioned partitions multiple times, because\n> we add up reltuples for all levels, no? (If I'm wrong, that is, if\n> a partitioned rel does not have reltuples, then why skip the parent?)\n\n+1, no need to skip partitioned tables here a their reltuples is always 0.\n\n> > + /*\n> > + * If the table is a leaf partition, tell the stats collector its parent's\n> > + * changes_since_analyze for auto analyze\n\nMaybe write:\n\nFor a leaf partition, add its current changes_since_analyze into its\nancestors' counts. This must be done before sending the ANALYZE\nmessage as it resets the partition's changes_since_analyze counter.\n\n> > + */\n> > + if (IsAutoVacuumWorkerProcess() &&\n> > + rel->rd_rel->relispartition &&\n> > + !(rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE))\n>\n> I'm not sure I understand why we do this only on autovac. Why not all\n> analyzes?\n\n+1. 
If there is a reason, it should at least be documented in the\ncomment above.\n\n> > + {\n> > + Oid parentoid;\n> > + Relation parentrel;\n> > + PgStat_StatDBEntry *dbentry;\n> > + PgStat_StatTabEntry *tabentry;\n> > +\n> > + /* Get its parent table's Oid and relation */\n> > + parentoid = get_partition_parent(RelationGetRelid(rel));\n> > + parentrel = table_open(parentoid, AccessShareLock);\n>\n> Climbing up the partitioning hierarchy acquiring locks on ancestor\n> relations opens up for deadlocks. It's better to avoid that. (As a\n> test, you could try what happens if you lock the topmost relation with\n> access-exclusive and leave a transaction open, then have autoanalyze\n> run). At the same time, I wonder if it's sensible to move one level up\n> here, and also have pgstat_report_partanalyze move more levels up.\n\nMaybe fetch all ancestors here and process from the top. But as we'd\nhave locked the leaf partition long before we got here, maybe we\nshould lock ancestors even before we start analyzing the leaf\npartition? AccessShareLock should be enough on the ancestors because\nwe're not actually analyzing them.\n\n(It appears get_partition_ancestors() returns a list where the root\nparent is the last element, so need to be careful with that.)\n\nThanks,\nAmit\n\n\n",
"msg_date": "Fri, 28 Feb 2020 12:57:59 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table"
},
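The ancestor-ordering caveat raised above can be illustrated with a small sketch. This is hypothetical Python modeling the idea, not PostgreSQL code; per the message, PostgreSQL's get_partition_ancestors() returns the immediate parent first and the topmost ancestor last, so the list must be reversed to lock from the top down:

```python
def lock_ancestors_top_down(leaf, get_ancestors, lock):
    # get_ancestors() is modeled after get_partition_ancestors(), which
    # returns the immediate parent first and the topmost ancestor LAST.
    ancestors = get_ancestors(leaf)
    # Reverse so the topmost ancestor is locked first; taking locks
    # top-down matches the order other sessions use and avoids
    # deadlocking against a session that locked the root first.
    ordered = list(reversed(ancestors))
    for rel in ordered:
        lock(rel, "AccessShareLock")
    return ordered
```

AccessShareLock suffices here because, as noted above, the ancestors are only being reported to, not analyzed.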
{
"msg_contents": "Hello,\n\nThank you for reviewing.\n\n> > > + */\n> > > + if (IsAutoVacuumWorkerProcess() &&\n> > > + rel->rd_rel->relispartition &&\n> > > + !(rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE))\n> >\n> > I'm not sure I understand why we do this only on autovac. Why not all\n> > analyzes?\n>\n> +1. If there is a reason, it should at least be documented in the\n> comment above.\n>\nWhen we analyze partitioned table by ANALYZE command,\nall inheritors including partitioned table are analyzed\nat the same time. In this case, if we call pgstat_report_partanalyze,\npartitioned table's changes_since_analyze is updated\naccording to the number of analyzed tuples of partitions\nas follows. But I think it should be 0.\n\n\\d+ p\n Partitioned table \"public.p\"\n Column | Type | Collation | Nullable | Default | Storage | Stats\ntarget | Description\n--------+---------+-----------+----------+---------+---------+--------------+-------------\n i | integer | | | | plain | |\nPartition key: RANGE (i)\nPartitions: p_1 FOR VALUES FROM (0) TO (100),\n p_2 FOR VALUES FROM (100) TO (200)\n\ninsert into p select * from generate_series(0,199);\nINSERT 0 200\n\n(before analyze)\n-[ RECORD 1 ]-------+------------------\nrelname | p\nn_mod_since_analyze | 0\n-[ RECORD 2 ]-------+------------------\nrelname | p_1\nn_mod_since_analyze | 100\n-[ RECORD 3 ]-------+------------------\nrelname | p_2\nn_mod_since_analyze | 100\n\n(after analyze)\n-[ RECORD 1 ]-------+------------------\nrelname | p\nn_mod_since_analyze | 200\n-[ RECORD 2 ]-------+------------------\nrelname | p_1\nn_mod_since_analyze | 0\n-[ RECORD 3 ]-------+------------------\nrelname | p_2\nn_mod_since_analyze | 0\n\n\nI think if we analyze partition tree in order from leaf partitions\nto root table, this problem can be fixed.\nWhat do you think about it?\n\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 17 Mar 2020 12:22:05 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Autovacuum on partitioned table"
},
{
"msg_contents": "Hello,\n\n> > > > + */\n> > > > + if (IsAutoVacuumWorkerProcess() &&\n> > > > + rel->rd_rel->relispartition &&\n> > > > + !(rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE))\n> > >\n> > > I'm not sure I understand why we do this only on autovac. Why not all\n> > > analyzes?\n> >\n> > +1. If there is a reason, it should at least be documented in the\n> > comment above.\n> >\n> When we analyze partitioned table by ANALYZE command,\n> all inheritors including partitioned table are analyzed\n> at the same time. In this case, if we call pgstat_report_partanalyze,\n> partitioned table's changes_since_analyze is updated\n> according to the number of analyzed tuples of partitions\n> as follows. But I think it should be 0.\n>\n> \\d+ p\n> Partitioned table \"public.p\"\n> Column | Type | Collation | Nullable | Default | Storage | Stats\n> target | Description\n> --------+---------+-----------+----------+---------+---------+--------------+-------------\n> i | integer | | | | plain | |\n> Partition key: RANGE (i)\n> Partitions: p_1 FOR VALUES FROM (0) TO (100),\n> p_2 FOR VALUES FROM (100) TO (200)\n>\n> insert into p select * from generate_series(0,199);\n> INSERT 0 200\n>\n> (before analyze)\n> -[ RECORD 1 ]-------+------------------\n> relname | p\n> n_mod_since_analyze | 0\n> -[ RECORD 2 ]-------+------------------\n> relname | p_1\n> n_mod_since_analyze | 100\n> -[ RECORD 3 ]-------+------------------\n> relname | p_2\n> n_mod_since_analyze | 100\n>\n> (after analyze)\n> -[ RECORD 1 ]-------+------------------\n> relname | p\n> n_mod_since_analyze | 200\n> -[ RECORD 2 ]-------+------------------\n> relname | p_1\n> n_mod_since_analyze | 0\n> -[ RECORD 3 ]-------+------------------\n> relname | p_2\n> n_mod_since_analyze | 0\n>\n>\n> I think if we analyze partition tree in order from leaf partitions\n> to root table, this problem can be fixed.\n> What do you think about it?\n>\n\nAttach the new patch fixes the above problem. 
Also, this patch\nincludes modifications according to all the comments Alvaro and Amit\nmade earlier in this thread.\n\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center",
"msg_date": "Wed, 18 Mar 2020 10:06:45 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Autovacuum on partitioned table"
},
{
"msg_contents": "On 2020-Mar-18, yuzuko wrote:\n\n> > I think if we analyze partition tree in order from leaf partitions\n> > to root table, this problem can be fixed.\n> > What do you think about it?\n> \n> Attach the new patch fixes the above problem.\n\nThanks for the new version.\n\nI'm confused about some error messages in the regression test when a\ncolumn is mentioned twice, that changed from mentioning the table named\nin the vacuum command, to mentioning the first partition. Is that\nbecause you changed an lappend() to lcons()? I think you do this so\nthat the counters accumulate for the topmost parent that will be\nprocessed at the end. I'm not sure I like that too much ... I think\nthat needs more thought.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 17 Mar 2020 22:49:10 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table"
},
{
"msg_contents": "Regarding this patch:\n\n+\t * the ANALYZE message as it resets the partition's changes_since_analze\n=> analyze\n\n+\t * If the relation is a partitioned table, we must add up children's\nchildrens'\n\nThe approach in general:\n\nI see an issue for timeseries data, where only the most recent partition is\nbeing inserted into, and the histogram endpoint is being continuously extended\n(this is our use-case). The most recent partition will be analyzed pretty\noften, and I think it'll be problematic if its parent doesn't get similar\ntreatment. Let's say there are 12 historic, monthly children with 1e6 tuples\neach, and the 13th child has 2e5 tuples (6 days into the month). It's analyzed\nwhen it grows by 20% (1.2 days), but at that point the parent has only grown by\n12x less (~2%) and won't be analyzed until 12x further into the future (14\ndays). Its histogram is 12x longer (geometrically), but the histogram changed\nby just as much (arithmetically). That's an issue for a query over \"the last\nfew days\"; if that's past the end of the histogram bound, the query planner\nwill estimate about ~0 tuples, and tend to give cascades of nested loops. I'm\nbiased, but I'm guessing that's too common a use case to answer that the proper\nfix is to set the parent's analyze_scale_factor=0.0005. I think that suggests\nthat the parent might sometimes need to be analyzed every time any of its\nchildren are. In other cases (like probably any hash partitioning), that'd be\nexcessive, and maybe the default settings shouldn't do that, but I think that\nbehavior ought to be possible, and I think this patch doesn't allow that. \n\nIn the past, I think there's was talk that maybe someone would invent a clever\nway to dynamically combine all the partitions' statistics, so analyzing the\nparent wasn't needed. I think that's easy enough for reltuples, MCV, and I\nthink histogram, but ISTM that ndistinct is simultaneously important to get\nright and hard to do so. 
It depends on whether it's the partition key, which\nnow can be an arbitrary expression. Extended stats further complicates it,\neven if we didn't aim to dynamically compute extended stats for a parent.\n\nWhile writing this, it occurred to me that we could use \"CREATE STATISTICS\" as a\nway to mark a partitioned table (or certain columns) as needing to be handled\nby analyze. I understand \"CREATE STATs\" was intended to (eventually) allow\nimplementing stats on expressions without using \"create index\" as a hack. So\nif it's excessive to automatically analyze a parent table when any of its\nchildren are analyzed, maybe it's less excessive to only do that for parents\nwith a stats object, and only on the given columns. I realize this patch is\na lot less useful if it requires doing anything extra/nondefault, and it's\ndesirable to work without creating a stats object at all. Also, using CREATE\nSTATs would reduce the CPU cost of re-analyzing the entire hierarchy, but\ndoesn't help to reduce the I/O cost, which is significant.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 18 Mar 2020 11:30:39 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
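The time-series arithmetic in the message above follows from autovacuum's analyze trigger, changes_since_analyze >= autovacuum_analyze_threshold + autovacuum_analyze_scale_factor * reltuples. A minimal Python sketch of that formula, using the 20% growth factor and tuple counts from the example (0.2 is not the PostgreSQL default, which is 0.1):

```python
def analyze_due(changes, reltuples, threshold=50, scale_factor=0.2):
    # Autovacuum's analyze trigger: a table is due for auto-analyze once
    #   changes_since_analyze >= threshold + scale_factor * reltuples
    # (scale_factor=0.2 matches the 20% growth in the example above).
    return changes >= threshold + scale_factor * reltuples

# Twelve full monthly partitions of 1e6 rows plus a partial month of 2e5.
child_tuples = 200_000
parent_tuples = 12 * 1_000_000 + child_tuples

changes = 40_050  # just enough to trip the newest child's threshold
assert analyze_due(changes, child_tuples)        # child analyzed now
assert not analyze_due(changes, parent_tuples)   # parent lags far behind
```

Because the parent's reltuples is ~61x the newest child's, the same stream of changes trips the parent's threshold far less often, which is the histogram-staleness problem described above.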
{
"msg_contents": "Hi Alvaro,\nThank you for your comments.\n\n> I'm confused about some error messages in the regression test when a\n> column is mentioned twice, that changed from mentioning the table named\n> in the vacuum command, to mentioning the first partition. Is that\n> because you changed an lappend() to lcons()? I think you do this so\n> that the counters accumulate for the topmost parent that will be\n> processed at the end. I'm not sure I like that too much ... I think\n> that needs more thought.\n>\nI couldn't come up with a solution that counts changes_since_analyze\nprecisely when analyzing partitioned trees by ANALYZE command based on\nthis approach (update all ancestor's changes_since_analyze according to the\nnumber of analyzed tuples of leaf partitions).\n\nSo I tried another approach to run autovacuum on partitioned tables.\nIn this approach, all ancestors' changed_tuples are updated when commiting\ntransactions (at AtEOXact_PgStat) according to the number of inserted/updated/\ndeleted tuples of leaf partitions.\n\nAttach the latest patch. What do you think?\n --\nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center",
"msg_date": "Tue, 7 Apr 2020 11:34:55 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Autovacuum on partitioned table"
},
{
"msg_contents": "Not sure if you saw my earlier message ?\n\nI think it ought to be possible to configure this feature such that an\nauto-analyze on any child partition would trigger analyze of the parent. I\nthink that would be important for maintaining accurate stats of the partition\nkey column for many cases involving RANGE-partitioned tables, which are likely\nto rely on histogram rather than MCVs.\n\nOn Wed, Mar 18, 2020 at 11:30:39AM -0500, Justin Pryzby wrote:\n> Regarding this patch:\n> \n> +\t * the ANALYZE message as it resets the partition's changes_since_analze\n> => analyze\n> \n> +\t * If the relation is a partitioned table, we must add up children's\n> childrens'\n> \n> The approach in general:\n> \n> I see an issue for timeseries data, where only the most recent partition is\n> being inserted into, and the histogram endpoint is being continuously extended\n> (this is our use-case). The most recent partition will be analyzed pretty\n> often, and I think it'll be problematic if its parent doesn't get similar\n> treatment. Let's say there are 12 historic, monthly children with 1e6 tuples\n> each, and the 13th child has 2e5 tuples (6 days into the month). It's analyzed\n> when it grows by 20% (1.2 days), but at that point the parent has only grown by\n> 12x less (~2%) and won't be analyzed until 12x further into the future (14\n> days). Its histogram is 12x longer (geometrically), but the histogram changed\n> by just as much (arithmetically). That's an issue for a query over \"the last\n> few days\"; if that's past the end of the histogram bound, the query planner\n> will estimate about ~0 tuples, and tend to give cascades of nested loops. I'm\n> biased, but I'm guessing that's too common a use case to answer that the proper\n> fix is to set the parent's analyze_scale_factor=0.0005. I think that suggests\n> that the parent might sometimes need to be analyzed every time any of its\n> children are. 
In other cases (like probably any hash partitioning), that'd be\n> excessive, and maybe the default settings shouldn't do that, but I think that\n> behavior ought to be possible, and I think this patch doesn't allow that. \n> \n> In the past, I think there's was talk that maybe someone would invent a clever\n> way to dynamically combine all the partitions' statistics, so analyzing the\n> parent wasn't needed. I think that's easy enough for reltuples, MCV, and I\n> think histogram, but ISTM that ndistinct is simultaneously important to get\n> right and hard to do so. It depends on whether it's the partition key, which\n> now can be an arbitrary expression. Extended stats further complicates it,\n> even if we didn't aim to dynamically compute extended stats for a parent.\n> \n> While writing this, it occured to me that we could use \"CREATE STATISTICS\" as a\n> way to mark a partitioned table (or certain columns) as needing to be handled\n> by analyze. I understand \"CREATE STATs\" was intended to (eventually) allow\n> implementing stats on expressions without using \"create index\" as a hack. So\n> if it's excessive to automatically analyze a parent table when any of its\n> children are analyzed, maybe it's less excessive to only do that for parents\n> with a stats object, and only on the given colums. I realize this patch is\n> alot less useful if it requires to do anything extra/nondefault, and it's\n> desirable to work without creating a stats object at all. Also, using CREATE\n> STATs would reduces the CPU cost of re-analyzing the entire heirarchy, but\n> doesn't help to reduce the I/O cost, which is significant.\n> \n> -- \n> Justin\n\n-- \nJustin Pryzby\nSystem Administrator\nTelsasoft\n+1-952-707-8581\n\n\n",
"msg_date": "Mon, 6 Apr 2020 22:32:56 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Hi Justin,\n\nThank you for commens.\n\nOn Tue, Apr 7, 2020 at 12:32 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> Not sure if you saw my earlier message ?\n>\nI'm sorry, I didn't notice for a while.\n\n> I think it ought to be possible to configure this feature such that an\n> auto-analyze on any child partition would trigger analyze of the parent. I\n> think that would be important for maintaining accurate stats of the partition\n> key column for many cases involving RANGE-partitioned tables, which are likely\n> to rely on histogram rather than MCVs.\n>\nI read your previous email and understand that it would be neccesary to analyze\npartitioned tables automatically when any of its children are analyzed. In my\nfirst patch, auto-analyze on partitioned tables worked like this but there were\nsome comments about performance of autovacuum, especially when partitioned\ntables have a lot of children.\n\nThe latest patch lets users set different autovacuum configuration for\neach partitioned\ntables like this,\n create table p3(i int) partition by range(i) with\n (autovacuum_analyze_scale_factor=0.0005, autovacuum_analyze_threshold=100);\nso users can configure those parameters according to partitioning strategies\nand other requirements.\n\nSo I think this patch can solve problem you mentioned.\n\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 16 Apr 2020 18:16:45 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On Thu, Apr 16, 2020 at 06:16:45PM +0900, yuzuko wrote:\n> > I think it ought to be possible to configure this feature such that an\n> > auto-analyze on any child partition would trigger analyze of the parent. I\n> > think that would be important for maintaining accurate stats of the partition\n> > key column for many cases involving RANGE-partitioned tables, which are likely\n> > to rely on histogram rather than MCVs.\n>\n> I read your previous email and understand that it would be neccesary to analyze\n> partitioned tables automatically when any of its children are analyzed. In my\n> first patch, auto-analyze on partitioned tables worked like this but there were\n> some comments about performance of autovacuum, especially when partitioned\n> tables have a lot of children.\n\nI reread that part. There was also confusion between autovacuum vacuum and\nautovacuum analyze.\n\nI agree that it *might* be a problem to analyze the parent every time any child\nis analyzed.\n\nBut it might also be what's needed for this feature to be useful.\n\n> The latest patch lets users set different autovacuum configuration for\n> each partitioned\n> tables like this,\n> create table p3(i int) partition by range(i) with\n> (autovacuum_analyze_scale_factor=0.0005, autovacuum_analyze_threshold=100);\n> so users can configure those parameters according to partitioning strategies\n> and other requirements.\n> \n> So I think this patch can solve problem you mentioned.\n\nI don't think that adequately allows what's needed.\n\nI think it out to be possible to get the \"analyze parent whenever a child is\nanalyzed\" behavior easily, without having to compute new thershold parameters\nevery time one adds partitions, detaches partitions, loades 10x more data into\none of the partitions, load only 10% as much data into the latest partition,\netc.\n\nFor example, say a new customer has bunch of partitioned tables which each\ncurrently have only one partition (for the current 
month), and that's expected\nto grow to at least 20+ partitions (2+ years of history). How does one set the\npartitioned table's auto-analyze parameters to analyze whenever any child is\nanalyzed ? I don't think it should be needed to update it every month after\ncomputing sum(child tuples).\n\nPossibly you could allow that behavior for some special values of the\nthreshold. Like if autovacuum_analyze_threshold=-2, then analyze the parent\nwhenever any of its children are analyzed.\n\nI think that use case and that need would be common, but I'd like to hear what\nothers think.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 16 Apr 2020 09:19:11 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On Thu, Apr 16, 2020 at 11:19 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Thu, Apr 16, 2020 at 06:16:45PM +0900, yuzuko wrote:\n> > The latest patch lets users set different autovacuum configuration for\n> > each partitioned\n> > tables like this,\n> > create table p3(i int) partition by range(i) with\n> > (autovacuum_analyze_scale_factor=0.0005, autovacuum_analyze_threshold=100);\n> > so users can configure those parameters according to partitioning strategies\n> > and other requirements.\n> >\n> > So I think this patch can solve problem you mentioned.\n>\n> I don't think that adequately allows what's needed.\n>\n> I think it out to be possible to get the \"analyze parent whenever a child is\n> analyzed\" behavior easily, without having to compute new thershold parameters\n> every time one adds partitions, detaches partitions, loades 10x more data into\n> one of the partitions, load only 10% as much data into the latest partition,\n> etc.\n>\n> For example, say a new customer has bunch of partitioned tables which each\n> currently have only one partition (for the current month), and that's expected\n> to grow to at least 20+ partitions (2+ years of history). How does one set the\n> partitioned table's auto-analyze parameters to analyze whenever any child is\n> analyzed ? I don't think it should be needed to update it every month after\n> computing sum(child tuples).\n>\n> Possibly you could allow that behavior for some special values of the\n> threshold. 
Like if autovacuum_analyze_threshold=-2, then analyze the parent\n> whenever any of its children are analyzed.\n>\n> I think that use case and that need would be common, but I'd like to hear what\n> others think.\n\nHaving to constantly pay attention to whether a parent's\nanalyze_threshold/scale_factor is working as intended would surely be\nan annoyance, so I tend to agree that we might need more than just the\nability to set analyze_threshold/scale_factor on parent tables.\nHowever, I think we can at least start with being able to do\n*something* here. :) Maybe others think that this shouldn't be\nconsidered committable until we figure out a good analyze threshold\ncalculation formula to apply to parent tables.\n\nFor the cases in which parent's tuple count grows at about the same\nrate as partitions (hash mainly), I guess the existing formula more or\nless works. That is, we can set the parent's threshold/scale_factor\nsame as partitions' and the autovacuum's existing formula will ensure\nthat the parent is auto-analyzed in time and not more than needed. For\ntime-series partitioning, the same formula won't work, as you have\ndetailed in your comments. Is there any other partitioning pattern for\nwhich the current formula won't work?\n\nConsidering that, how about having, say, a\nautovacuum_analyze_partition_parent_frequency, with string values\n'default', 'partition'? -- 'default' assumes the same formula as\nregular tables, whereas with 'partition', parent is analyzed as soon\nas a partition is.\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 17 Apr 2020 22:09:07 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On Fri, Apr 17, 2020 at 10:09:07PM +0900, Amit Langote wrote:\n> On Thu, Apr 16, 2020 at 11:19 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Thu, Apr 16, 2020 at 06:16:45PM +0900, yuzuko wrote:\n> > I don't think that adequately allows what's needed.\n...(paragraph with my typos elided)...\n> > For example, say a new customer has bunch of partitioned tables which each\n> > currently have only one partition (for the current month), and that's expected\n> > to grow to at least 20+ partitions (2+ years of history). How does one set the\n> > partitioned table's auto-analyze parameters to analyze whenever any child is\n> > analyzed ? I don't think it should be needed to update it every month after\n> > computing sum(child tuples).\n> >\n> > Possibly you could allow that behavior for some special values of the\n> > threshold. Like if autovacuum_analyze_threshold=-2, then analyze the parent\n> > whenever any of its children are analyzed.\n> >\n> > I think that use case and that need would be common, but I'd like to hear what\n> > others think.\n> \n> Having to constantly pay attention to whether a parent's\n> analyze_threshold/scale_factor is working as intended would surely be\n> an annoyance, so I tend to agree that we might need more than just the\n> ability to set analyze_threshold/scale_factor on parent tables.\n> However, I think we can at least start with being able to do\n> *something* here. :) Maybe others think that this shouldn't be\n> considered committable until we figure out a good analyze threshold\n> calculation formula to apply to parent tables.\n> \n> Considering that, how about having, say, a\n> autovacuum_analyze_partition_parent_frequency, with string values\n> 'default', 'partition'? 
-- 'default' assumes the same formula as\n> regular tables, whereas with 'partition', parent is analyzed as soon\n> as a partition is.\n\nI assume you mean a reloption to be applied only to partitioned tables,\n\nYour \"partition\" setting would mean that the scale/threshold values would have\nno effect, which seems kind of unfortunate.\n\nI think it should be called something else, and done differently, like maybe:\nautovacuum_analyze_mode = {off,sum,max,...}\n\nThe threshold would be threshold + scale*tuples, as always, but would be\ncompared with f(changes) as determined by the relopt.\n\nsum(changes) would do what you called \"default\", comparing the sum(changes)\nacross all partitions to the threshold, which is itself computed using\nsum(reltuples) AS reltuples.\n\nmax(changes) would compute max(changes) compared to the threshold, and the\nthreshold would be computed separately for each partition's reltuples:\nthreshold_N = parent_threshold + parent_scale * part_N_tuples. If *any*\npartition exceeds that threshold, the partition itself is analyzed. This\nallows what I want for time-series. Maybe this would have an alias called\n\"any\".\n\nI'm not sure if there's any other useful modes, like avg(changes)? I guess we\ncan add them later if someone thinks of a good use case.\n\nAlso, for me, the v7 patch warns:\n|src/backend/postmaster/autovacuum.c:3117:70: warning: ‘reltuples’ may be used uninitialized in this function [-Wmaybe-uninitialized]\n| vacinsthresh = (float4) vac_ins_base_thresh + vac_ins_scale_factor * reltuples;\n..which seems to be a false positive, but easily avoided.\n\n\nThis patch includes partitioned tables in pg_stat_*_tables, which is great; I\ncomplained awhile ago that they were missing [0]. It might be useful if that\npart was split out into a separate 0001 patch (?).\n\nThanks,\n-- \nJustin\n\n[0] https://www.postgresql.org/message-id/20180601221428.GU5164%40telsasoft.com\n\n\n",
"msg_date": "Sat, 18 Apr 2020 00:08:15 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Hello,\n\nOn Sat, Apr 18, 2020 at 2:08 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Fri, Apr 17, 2020 at 10:09:07PM +0900, Amit Langote wrote:\n> > On Thu, Apr 16, 2020 at 11:19 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > On Thu, Apr 16, 2020 at 06:16:45PM +0900, yuzuko wrote:\n> > > I don't think that adequately allows what's needed.\n> ...(paragraph with my typos elided)...\n> > > For example, say a new customer has bunch of partitioned tables which each\n> > > currently have only one partition (for the current month), and that's expected\n> > > to grow to at least 20+ partitions (2+ years of history). How does one set the\n> > > partitioned table's auto-analyze parameters to analyze whenever any child is\n> > > analyzed ? I don't think it should be needed to update it every month after\n> > > computing sum(child tuples).\n> > >\n> > > Possibly you could allow that behavior for some special values of the\n> > > threshold. Like if autovacuum_analyze_threshold=-2, then analyze the parent\n> > > whenever any of its children are analyzed.\n> > >\n> > > I think that use case and that need would be common, but I'd like to hear what\n> > > others think.\n> >\n> > Having to constantly pay attention to whether a parent's\n> > analyze_threshold/scale_factor is working as intended would surely be\n> > an annoyance, so I tend to agree that we might need more than just the\n> > ability to set analyze_threshold/scale_factor on parent tables.\n> > However, I think we can at least start with being able to do\n> > *something* here. :) Maybe others think that this shouldn't be\n> > considered committable until we figure out a good analyze threshold\n> > calculation formula to apply to parent tables.\n> >\n> > Considering that, how about having, say, a\n> > autovacuum_analyze_partition_parent_frequency, with string values\n> > 'default', 'partition'? 
-- 'default' assumes the same formula as\n> > regular tables, whereas with 'partition', parent is analyzed as soon\n> > as a partition is.\n>\n> I assume you mean a reloption to be applied only to partitioned tables,\n>\n> Your \"partition\" setting would mean that the scale/threshold values would have\n> no effect, which seems kind of unfortunate.\n>\n> I think it should be called something else, and done differently, like maybe:\n> autovacuum_analyze_mode = {off,sum,max,...}\n>\nThe above reloption you suggested will be applied all tables?\nUsers might not use it for partitions, so I think we should add \"parent\"\nto reloption's name, like Amit's suggestion.\n\n> The threshold would be threshold + scale*tuples, as always, but would be\n> compared with f(changes) as determined by the relopt.\n>\n> sum(changes) would do what you called \"default\", comparing the sum(changes)\n> across all partitions to the threshold, which is itself computed using\n> sum(reltuples) AS reltuples.\n>\n> max(changes) would compute max(changes) compared to the threshold, and the\n> threshold would be computed separately for each partition's reltuples:\n> threshold_N = parent_threshold + parent_scale * part_N_tuples. If *any*\n> partition exceeds that threshold, the partition itself is analyzed. This\n> allows what I want for time-series. Maybe this would have an alias called\n> \"any\".\n>\nI may be wrong but I think the fomula,\n> threshold_N = parent_threshold + parent_scale * part_N_tuples\nwould use orginary table's threshold, not parent's. If it use parent_threshold,\nparent might not be analyzed even if its any partition is analyzed when\nparent_threshold is larger than normal threshold. I'm worried that this case\nmeets requirements for time-series.\n\n> I'm not sure if there's any other useful modes, like avg(changes)? 
I guess we\n> can add them later if someone thinks of a good use case.\n>\n> Also, for me, the v7 patch warns:\n> |src/backend/postmaster/autovacuum.c:3117:70: warning: ‘reltuples’ may be used uninitialized in this function [-Wmaybe-uninitialized]\n> | vacinsthresh = (float4) vac_ins_base_thresh + vac_ins_scale_factor * reltuples;\n> ..which seems to be a false positive, but easily avoided.\n>\nThank you for testing the patch.\nI got it. I'll update the patch soon.\n\n>\n> This patch includes partitioned tables in pg_stat_*_tables, which is great; I\n> complained awhile ago that they were missing [0]. It might be useful if that\n> part was split out into a separate 0001 patch (?).\n>\nIf partitioned table's statistics is used for other purposes, I think\nit would be\nbetter to split the patch. Does anyone have any opinion?\n\n---\nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 22 Apr 2020 01:21:04 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On Wed, Mar 18, 2020 at 11:30:39AM -0500, Justin Pryzby wrote:\n> In the past, I think there's was talk that maybe someone would invent a clever\n> way to dynamically combine all the partitions' statistics, so analyzing the\n> parent wasn't needed. [...]\n\nI happened across the thread I was referring to:\nhttps://www.postgresql.org/message-id/7363.1426537103@sss.pgh.pa.us\n\nI'm not opposed to doing things the currently-proposed way (trigger analyze of\npartitioned tables based on updates, same as nonpartitioned tables), but we\nshould think if it's worth doing something totally different, like what Tom\nproposed.\n\nRobert had concerns that it would increase planning time. I imagine that\nargument is even stronger now, since PG12 has *less* planning time for large\nheirarchies (428b260f8) and advertizes support for \"thousands\" of partitions.\n\nTom said:\n> we would automatically get statistics that account for\n> partitions being eliminated by constraint exclusion, because only the\n> non-eliminated partitions are present in the appendrel. And second,\n\nThat's a pretty strong benefit. I don't know if there's a good way to support\nboth(either) ways of doing things. Like maybe a reloption that allows\ntriggering autovacuum on partitioned tables, but if no statistics exist on a\npartitioned table, then the planner would dynamically determine the selectivity\nby decending into child statistics (Tom's way). 
I think the usual way this\nwould play out is that someone with a small partition heirarchies would\neventually complain about high planning time and then we'd suggest implementing\na manual ANALYZE job.\n\nI'm not sure it's good to support two ways anyway, since 1) I think that gives\ndifferent (better) statistics Tom's way (due to excluding stats of excluded\npartitions); 2) there's not a good way to put an ANALYZE job in place and then\nget rid of parent stats (have to DELETE FROM pg_statistic WHERE\nstarelid='...'::regclass; 3) if someone implements an ANALYZE job, but they\ndisable it or it stops working then they have outdated stats forever; \n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 25 Apr 2020 09:13:20 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On Sat, Apr 25, 2020 at 11:13 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Mar 18, 2020 at 11:30:39AM -0500, Justin Pryzby wrote:\n> > In the past, I think there's was talk that maybe someone would invent a clever\n> > way to dynamically combine all the partitions' statistics, so analyzing the\n> > parent wasn't needed. [...]\n>\n> I happened across the thread I was referring to:\n> https://www.postgresql.org/message-id/7363.1426537103@sss.pgh.pa.us\n>\n> I'm not opposed to doing things the currently-proposed way (trigger analyze of\n> partitioned tables based on updates, same as nonpartitioned tables), but we\n> should think if it's worth doing something totally different, like what Tom\n> proposed.\n>\n> Robert had concerns that it would increase planning time. I imagine that\n> argument is even stronger now, since PG12 has *less* planning time for large\n> heirarchies (428b260f8) and advertizes support for \"thousands\" of partitions.\n>\n> Tom said:\n> > we would automatically get statistics that account for\n> > partitions being eliminated by constraint exclusion, because only the\n> > non-eliminated partitions are present in the appendrel. And second,\n>\n> That's a pretty strong benefit. I don't know if there's a good way to support\n> both(either) ways of doing things. Like maybe a reloption that allows\n> triggering autovacuum on partitioned tables, but if no statistics exist on a\n> partitioned table, then the planner would dynamically determine the selectivity\n> by decending into child statistics (Tom's way). 
I think the usual way this\n> would play out is that someone with a small partition heirarchies would\n> eventually complain about high planning time and then we'd suggest implementing\n> a manual ANALYZE job.\n>\n> I'm not sure it's good to support two ways anyway, since 1) I think that gives\n> different (better) statistics Tom's way (due to excluding stats of excluded\n> partitions); 2) there's not a good way to put an ANALYZE job in place and then\n> get rid of parent stats (have to DELETE FROM pg_statistic WHERE\n> starelid='...'::regclass; 3) if someone implements an ANALYZE job, but they\n> disable it or it stops working then they have outdated stats forever;\n\nThanks for sharing that thread, had not seen it before.\n\nI remember discussing with Alvaro and Hosoya-san an approach of\ngenerating the whole-tree pg_statistics entries by combining the\nchildren's entries, not during planning as the linked thread\ndiscusses, but inside autovacuum. The motivation for that design was\nthe complaint that we scan the children twice with the current method\nof generating whole-tree statistics -- first to generate their own\nstatistics and then again to generate the parent's.\n\nAside from how hard it would be to actually implement, that approach\nalso doesn't address the concern about when to generate the whole-tree\nstatistics. Because the linked thread mentions getting rid of the\nwhole-tree statistics altogether, there is no such concern if we go\nits way. Although I do agree with Robert's assertion on that thread\nthat making every query on a parent a bit slower would not be a good\ncompromise.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 30 Apr 2020 15:56:00 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "> On 21 Apr 2020, at 18:21, yuzuko <yuzukohosoya@gmail.com> wrote:\n\n> I'll update the patch soon.\n\nDo you have an updated version to submit? The previous patch no longer applies\nto HEAD, so I'm marking this entry Waiting on Author in the meantime.\n\ncheers ./daniel\n\n",
"msg_date": "Wed, 1 Jul 2020 11:26:44 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "> On Wed, Jul 1, 2020 at 6:26 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 21 Apr 2020, at 18:21, yuzuko <yuzukohosoya@gmail.com> wrote:\n>\n> > I'll update the patch soon.\n>\n> Do you have an updated version to submit? The previous patch no longer applies\n> to HEAD, so I'm marking this entry Waiting on Author in the meantime.\n>\nThank you for letting me know.\nI attach the latest patch applies to HEAD.\n\nI think there are other approaches like Tom's idea that Justin previously\nreferenced, but this patch works the same way as previous patches.\n(tracks updated/inserted/deleted tuples and checks whether the partitioned\ntables needs auto-analyze, same as nonpartitioned tables)\nBecause I wanted to be able to analyze partitioned tables by autovacuum\nas a first step, and I think this approach is the simplest way to do it.\n\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center",
"msg_date": "Mon, 6 Jul 2020 19:35:37 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "> On 6 Jul 2020, at 12:35, yuzuko <yuzukohosoya@gmail.com> wrote:\n> \n>> On Wed, Jul 1, 2020 at 6:26 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> \n>>> On 21 Apr 2020, at 18:21, yuzuko <yuzukohosoya@gmail.com> wrote:\n>> \n>>> I'll update the patch soon.\n>> \n>> Do you have an updated version to submit? The previous patch no longer applies\n>> to HEAD, so I'm marking this entry Waiting on Author in the meantime.\n>> \n> Thank you for letting me know.\n> I attach the latest patch applies to HEAD.\n\nThis version seems to fail under Werror which is used in the Travis builds:\n\nautovacuum.c: In function ‘relation_needs_vacanalyze’:\nautovacuum.c:3117:59: error: ‘reltuples’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n anlthresh = (float4) anl_base_thresh + anl_scale_factor * reltuples;\n ^\nautovacuum.c:2972:9: note: ‘reltuples’ was declared here\n float4 reltuples; /* pg_class.reltuples */\n ^\n\nI've moved this patch to the next commitfest, but kept the status as Waiting on\nAuthor. Please submit a new version of the patch.\n\ncheers ./daniel\n\n",
"msg_date": "Sat, 1 Aug 2020 23:50:59 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "I'm sorry for the late reply.\n\n> This version seems to fail under Werror which is used in the Travis builds:\n>\n> autovacuum.c: In function ‘relation_needs_vacanalyze’:\n> autovacuum.c:3117:59: error: ‘reltuples’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n> anlthresh = (float4) anl_base_thresh + anl_scale_factor * reltuples;\n> ^\n> autovacuum.c:2972:9: note: ‘reltuples’ was declared here\n> float4 reltuples; /* pg_class.reltuples */\n> ^\n>\n\nI attach the latest patch that solves the above Werror.\nCould you please check it again?\n\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center",
"msg_date": "Mon, 17 Aug 2020 15:11:28 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "> On 17 Aug 2020, at 08:11, yuzuko <yuzukohosoya@gmail.com> wrote:\n> \n> I'm sorry for the late reply.\n> \n>> This version seems to fail under Werror which is used in the Travis builds:\n>> \n>> autovacuum.c: In function ‘relation_needs_vacanalyze’:\n>> autovacuum.c:3117:59: error: ‘reltuples’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n>> anlthresh = (float4) anl_base_thresh + anl_scale_factor * reltuples;\n>> ^\n>> autovacuum.c:2972:9: note: ‘reltuples’ was declared here\n>> float4 reltuples; /* pg_class.reltuples */\n>> ^\n> \n> I attach the latest patch that solves the above Werror.\n> Could you please check it again?\n\nThis version now pass the tests in the Travis pipeline as can be seen in the\nlink below, and is ready to be reviewed in the upcoming commitfest:\n\n\thttp://cfbot.cputube.org/yuzuko-hosoya.html\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 25 Aug 2020 14:28:20 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "At Tue, 25 Aug 2020 14:28:20 +0200, Daniel Gustafsson <daniel@yesql.se> wrote in \n> > I attach the latest patch that solves the above Werror.\n> > Could you please check it again?\n> \n> This version now pass the tests in the Travis pipeline as can be seen in the\n> link below, and is ready to be reviewed in the upcoming commitfest:\n> \n> \thttp://cfbot.cputube.org/yuzuko-hosoya.html\n\nAt Mon, 6 Jul 2020 19:35:37 +0900, yuzuko <yuzukohosoya@gmail.com> wrote in \n> I think there are other approaches like Tom's idea that Justin previously\n> referenced, but this patch works the same way as previous patches.\n> (tracks updated/inserted/deleted tuples and checks whether the partitioned\n> tables needs auto-analyze, same as nonpartitioned tables)\n> Because I wanted to be able to analyze partitioned tables by autovacuum\n> as a first step, and I think this approach is the simplest way to do it.\n\nI'm not sure if anything bad happen if parent and children are not\nagree on statistics.\n\nThe requirement suggested here seems to be:\n\n- We want to update parent's stats when any of its children gets its\n stats updated. 
This is curucial especially for time-series\n partitioning.\n\n- However, we don't want analyze the whole-tree every time any of the\n children was analyzed.\n\nTo achieve the both, stats-merging seems to the optimal solution.\n\nPutting that aside, I had a brief look on the latest patch.\n\n \t/* We only count stats for things that have storage */\n-\tif (!RELKIND_HAS_STORAGE(relkind))\n+\tif (!RELKIND_HAS_STORAGE(relkind) ||\n+\t\trelkind == RELKIND_PARTITIONED_TABLE)\n \t{\n \t\trel->pgstat_info = NULL;\n\nRELKIND_HAS_STORAGE(RELKIND_PARTITIONED_TABLE) is already false.\nMaybe you wanted to do \"&& relkind !=\" instead:p\n\n\n+\t\t/*\n+\t\t * If this relation is partitioned, we store all ancestors' oid\n+\t\t * to propagate its changed_tuples to their parents when this\n+\t\t * transaction is committed.\n+\t\t */\n+\t\tif (rel->rd_rel->relispartition && pgstat_info->ancestors == NULL)\n\nIf the relation was detached then attached to another partition within\na transaction, the ancestor list would get stale and the succeeding\nmodification to the relation propagates into wrong ancestors.\n\nI think vacuum time is more appropriate to modify ancestors stats. It\nseems to me that what Alvalo pointed isthe list-order-susceptible\nmanner of collecting children's modified tuples.\n\n\n+\t\t? 0 /* partitioned tables don't have any data, so it's 0 */\n\nIf the comment is true, we shouldn't have non-zero t_changed_tuples,\ntoo. I think the reason for the lines is something different.\n\n# Oops! Time's up now.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 15 Sep 2020 19:01:35 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": " Horiguchi-san,\n\nThank you for reviewing.\n\n\nOn Tue, Sep 15, 2020 at 7:01 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 25 Aug 2020 14:28:20 +0200, Daniel Gustafsson <daniel@yesql.se> wrote in\n> > > I attach the latest patch that solves the above Werror.\n> > > Could you please check it again?\n> >\n> > This version now pass the tests in the Travis pipeline as can be seen in the\n> > link below, and is ready to be reviewed in the upcoming commitfest:\n> >\n> > http://cfbot.cputube.org/yuzuko-hosoya.html\n>\n> At Mon, 6 Jul 2020 19:35:37 +0900, yuzuko <yuzukohosoya@gmail.com> wrote in\n> > I think there are other approaches like Tom's idea that Justin previously\n> > referenced, but this patch works the same way as previous patches.\n> > (tracks updated/inserted/deleted tuples and checks whether the partitioned\n> > tables needs auto-analyze, same as nonpartitioned tables)\n> > Because I wanted to be able to analyze partitioned tables by autovacuum\n> > as a first step, and I think this approach is the simplest way to do it.\n>\n> I'm not sure if anything bad happen if parent and children are not\n> agree on statistics.\n>\n> The requirement suggested here seems to be:\n>\n> - We want to update parent's stats when any of its children gets its\n> stats updated. This is curucial especially for time-series\n> partitioning.\n>\n> - However, we don't want analyze the whole-tree every time any of the\n> children was analyzed.\n>\n> To achieve the both, stats-merging seems to the optimal solution.\n>\n> Putting that aside, I had a brief look on the latest patch.\n>\n> /* We only count stats for things that have storage */\n> - if (!RELKIND_HAS_STORAGE(relkind))\n> + if (!RELKIND_HAS_STORAGE(relkind) ||\n> + relkind == RELKIND_PARTITIONED_TABLE)\n> {\n> rel->pgstat_info = NULL;\n>\n> RELKIND_HAS_STORAGE(RELKIND_PARTITIONED_TABLE) is already false.\n> Maybe you wanted to do \"&& relkind !=\" instead:p\n>\nOh, indeed. 
I'll fix it.\n\n>\n> + /*\n> + * If this relation is partitioned, we store all ancestors' oid\n> + * to propagate its changed_tuples to their parents when this\n> + * transaction is committed.\n> + */\n> + if (rel->rd_rel->relispartition && pgstat_info->ancestors == NULL)\n>\n> If the relation was detached then attached to another partition within\n> a transaction, the ancestor list would get stale and the succeeding\n> modification to the relation propagates into wrong ancestors.\n>\n> I think vacuum time is more appropriate to modify ancestors stats. It\n> seems to me that what Alvalo pointed isthe list-order-susceptible\n> manner of collecting children's modified tuples.\n>\nI proposed a patch that modified ancestors stats when vacuuming previously.\nIn that time, having been pointed out by Alvaro and Amit, I tried to update the\nparents' changes_since_analyze in every ANALYZE. However, in that case,\nthe problem mentioned in [1] occurred, but I could not find a way to avoid it.\nI think that it can be solved by updating the parents' changes_since_analyze\nonly in the case of auto analyze, but what do you think?\n\n>\n> + ? 0 /* partitioned tables don't have any data, so it's 0 */\n>\n> If the comment is true, we shouldn't have non-zero t_changed_tuples,\n> too. I think the reason for the lines is something different.\n>\nYes, surely. I think updating the values of live_tuples and dead_tuples\nis confusing for users. I'll consider another comment.\n\n\n[1] https://www.postgresql.org/message-id/CAKkQ50-bwFEDMBGb1JmDXffXsiU8xk-hN6kJK9CKjdBa7r%3DHdw%40mail.gmail.com\n--\nBest regards,\nYuzuko Hosoya\n\n\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 17 Sep 2020 17:12:36 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Hello,\n\nI reconsidered a way based on the v5 patch in line with\nHoriguchi-san's comment.\n\nThis approach is as follows:\n- A partitioned table is checked whether it needs analyze like a plain\n table in relation_needs_vacanalyze(). To do this, we should store\n partitioned table's stats (changes_since_analyze).\n- Partitioned table's changes_since_analyze is updated when\n analyze a leaf partition by propagating its changes_since_analyze.\n In the next scheduled analyze time, it is used in the above process.\n That is, the partitioned table is analyzed behind leaf partitions.\n- The propagation process differs between autoanalyze or plain analyze.\n In autoanalyze, a leaf partition's changes_since_analyze is propagated\n to *all* ancestors. Whereas, in plain analyze on an inheritance tree,\n propagates to ancestors not included the tree to avoid needless counting.\n\nAttach the latest patch to this email.\nCould you check it again please?\n\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center",
"msg_date": "Fri, 23 Oct 2020 15:12:51 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Thanks you for the new version.\n\nAt Fri, 23 Oct 2020 15:12:51 +0900, yuzuko <yuzukohosoya@gmail.com> wrote in \n> Hello,\n> \n> I reconsidered a way based on the v5 patch in line with\n> Horiguchi-san's comment.\n> \n> This approach is as follows:\n> - A partitioned table is checked whether it needs analyze like a plain\n> table in relation_needs_vacanalyze(). To do this, we should store\n> partitioned table's stats (changes_since_analyze).\n> - Partitioned table's changes_since_analyze is updated when\n> analyze a leaf partition by propagating its changes_since_analyze.\n> In the next scheduled analyze time, it is used in the above process.\n> That is, the partitioned table is analyzed behind leaf partitions.\n> - The propagation process differs between autoanalyze or plain analyze.\n> In autoanalyze, a leaf partition's changes_since_analyze is propagated\n> to *all* ancestors. Whereas, in plain analyze on an inheritance tree,\n> propagates to ancestors not included the tree to avoid needless counting.\n> \n> Attach the latest patch to this email.\n> Could you check it again please?\n\n+\t\t/*\n+\t\t * Get its all ancestors to propagate changes_since_analyze count.\n+\t\t * However, when ANALYZE inheritance tree, we get ancestors of\n+\t\t * toprel_oid to avoid needless counting.\n+\t\t */\n+\t\tif (!OidIsValid(toprel_oid))\n+\t\t\tancestors = get_partition_ancestors(RelationGetRelid(rel));\n+\t\telse\n+\t\t\tancestors = get_partition_ancestors(toprel_oid);\n\nThis comment doesn't explaining what the code intends but what the\ncode does.\n\nThe reason for the difference is that if we have a valid toprel_oid,\nwe analyze all descendants of the relation this time, and if we\npropagate the number to the descendants of the top relation, the next\nanalyze on the relations could happen shortly than expected.\n\n\n-\tmsg.m_live_tuples = livetuples;\n-\tmsg.m_dead_tuples = deadtuples;\n+\tmsg.m_live_tuples = (rel->rd_rel->relkind == 
RELKIND_PARTITIONED_TABLE)\n+\t\t? 0 /* if this is a partitioned table, skip modifying */\n+\t\t: livetuples;\n+\tmsg.m_dead_tuples = (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n+\t\t? 0 /* if this is a partitioned table, skip modifying */\n+\t\t: deadtuples;\n\nTwo successive branching with the same condition looks odd. And we\nneed an explanation of *why* we don't set the values for partitioned\ntables.\n\n+\t\tforeach(lc, ancestors)\n+\t\t{\n+\t\t\tOid parentOid = lfirst_oid(lc);\n+\t\t\tRelation parentrel;\n+\n+\t\t\tparentrel = table_open(parentOid, AccessShareLock);\n\nI'm not sure, but all of the ancestors always cannot be a parent (in\nother words, a parent of a parent of mine is not a parent of\nmine). Isn't just rel sufficient?\n\n\n-\t * Report ANALYZE to the stats collector, too. However, if doing\n-\t * inherited stats we shouldn't report, because the stats collector only\n-\t * tracks per-table stats. Reset the changes_since_analyze counter only\n-\t * if we analyzed all columns; otherwise, there is still work for\n-\t * auto-analyze to do.\n+\t * Report ANALYZE to the stats collector, too. Reset the\n+\t * changes_since_analyze counter only if we analyzed all columns;\n+\t * otherwise, there is still work for auto-analyze to do.\n \t */\n-\tif (!inh)\n+\tif (!inh || onerel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n \t\tpgstat_report_analyze(onerel, totalrows, totaldeadrows,\n\nThis still rejects traditional inheritance nonleaf relations. But if\nwe remove the description about that completely in the comment above,\nwe should support traditional inheritance parents here. I think we\ncan do that as far as we need to propagate only per-tuple stats (that\nmans not per-attribute) like changes_since_analyze.\n\nWhichever way we take, do we need the description about the behavior\nin the documentation?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 23 Oct 2020 20:23:27 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On Fri, Oct 23, 2020 at 03:12:51PM +0900, yuzuko wrote:\n> Hello,\n> \n> I reconsidered a way based on the v5 patch in line with\n> Horiguchi-san's comment.\n> \n> This approach is as follows:\n> - A partitioned table is checked whether it needs analyze like a plain\n> table in relation_needs_vacanalyze(). To do this, we should store\n> partitioned table's stats (changes_since_analyze).\n> - Partitioned table's changes_since_analyze is updated when\n> analyze a leaf partition by propagating its changes_since_analyze.\n> In the next scheduled analyze time, it is used in the above process.\n> That is, the partitioned table is analyzed behind leaf partitions.\n> - The propagation process differs between autoanalyze or plain analyze.\n> In autoanalyze, a leaf partition's changes_since_analyze is propagated\n> to *all* ancestors. Whereas, in plain analyze on an inheritance tree,\n> propagates to ancestors not included the tree to avoid needless counting.\n\n+ * Get its all ancestors to propagate changes_since_analyze count.\n+ * However, when ANALYZE inheritance tree, we get ancestors of\n+ * toprel_oid to avoid needless counting.\n\n=> I don't understand that comment.\n\n+ /* Find all members of inheritance set taking AccessShareLock */\n+ children = find_all_inheritors(relid, AccessShareLock, NULL);\n\n=> Do you know that returns the table itself ? And in pg14dev, each\npartitioned table has reltuples = -1, not zero...\n\n+ /* Skip foreign partitions */\n+ if (childclass->relkind == RELKIND_FOREIGN_TABLE)\n+ continue;\n\n=> Michael's suggrestion is to use RELKIND_HAS_STORAGE to skip both foreign and\npartitioned tables.\n\nAlso, you called SearchSysCacheCopy1, but didn't free the tuple. I don't think\nyou need to copy it anyway - just call ReleaseSysCache().\n\nRegarding the counters in pg_stat_all_tables: maybe some of these should be\nnull rather than zero ? 
Or else you should make an 0001 patch to fully\nimplement this view, with all relevant counters, not just n_mod_since_analyze,\nlast_*analyze, and *analyze_count. These are specifically misleading:\n\nlast_vacuum | \nlast_autovacuum | \nn_ins_since_vacuum | 0\nvacuum_count | 0\nautovacuum_count | 0\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 26 Oct 2020 22:22:50 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Horiguchi-san,\n\nThank you for your comments.\n\nOn Fri, Oct 23, 2020 at 8:23 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Thanks you for the new version.\n>\n> At Fri, 23 Oct 2020 15:12:51 +0900, yuzuko <yuzukohosoya@gmail.com> wrote in\n> > Hello,\n> >\n> > I reconsidered a way based on the v5 patch in line with\n> > Horiguchi-san's comment.\n> >\n> > This approach is as follows:\n> > - A partitioned table is checked whether it needs analyze like a plain\n> > table in relation_needs_vacanalyze(). To do this, we should store\n> > partitioned table's stats (changes_since_analyze).\n> > - Partitioned table's changes_since_analyze is updated when\n> > analyze a leaf partition by propagating its changes_since_analyze.\n> > In the next scheduled analyze time, it is used in the above process.\n> > That is, the partitioned table is analyzed behind leaf partitions.\n> > - The propagation process differs between autoanalyze or plain analyze.\n> > In autoanalyze, a leaf partition's changes_since_analyze is propagated\n> > to *all* ancestors. 
Whereas, in plain analyze on an inheritance tree,\n> > propagates to ancestors not included the tree to avoid needless counting.\n> >\n> > Attach the latest patch to this email.\n> > Could you check it again please?\n>\n> + /*\n> + * Get its all ancestors to propagate changes_since_analyze count.\n> + * However, when ANALYZE inheritance tree, we get ancestors of\n> + * toprel_oid to avoid needless counting.\n> + */\n> + if (!OidIsValid(toprel_oid))\n> + ancestors = get_partition_ancestors(RelationGetRelid(rel));\n> + else\n> + ancestors = get_partition_ancestors(toprel_oid);\n>\n> This comment doesn't explaining what the code intends but what the\n> code does.\n>\n> The reason for the difference is that if we have a valid toprel_oid,\n> we analyze all descendants of the relation this time, and if we\n> propagate the number to the descendants of the top relation, the next\n> analyze on the relations could happen shortly than expected.\n>\nI modified this comment according to your advice.\n\n>\n> - msg.m_live_tuples = livetuples;\n> - msg.m_dead_tuples = deadtuples;\n> + msg.m_live_tuples = (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n> + ? 0 /* if this is a partitioned table, skip modifying */\n> + : livetuples;\n> + msg.m_dead_tuples = (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n> + ? 0 /* if this is a partitioned table, skip modifying */\n> + : deadtuples;\n>\n> Two successive branching with the same condition looks odd. 
And we\n> need an explanation of *why* we don't set the values for partitioned\n> tables.\n\nI moved this part to the previous block, where livetuples and deadtuples are set.\nActually, I think the reason those counters are set to 0 when the given\nrelation is a partitioned table is that such a table doesn't have any data.\nAs for the changes_since_analyze counter, we support it for partitioned\ntables as an exception, in order to check whether they need auto analyze.\nI added this description to the comment of this function.\n\n>\n> + foreach(lc, ancestors)\n> + {\n> + Oid parentOid = lfirst_oid(lc);\n> + Relation parentrel;\n> +\n> + parentrel = table_open(parentOid, AccessShareLock);\n>\n> I'm not sure, but all of the ancestors always cannot be a parent (in\n> other words, a parent of a parent of mine is not a parent of\n> mine). Isn't just rel sufficient?\n>\nI changed 'parentrel' to 'rel'.\n\n>\n>\n> - * Report ANALYZE to the stats collector, too. However, if doing\n> - * inherited stats we shouldn't report, because the stats collector only\n> - * tracks per-table stats. Reset the changes_since_analyze counter only\n> - * if we analyzed all columns; otherwise, there is still work for\n> - * auto-analyze to do.\n> + * Report ANALYZE to the stats collector, too. Reset the\n> + * changes_since_analyze counter only if we analyzed all columns;\n> + * otherwise, there is still work for auto-analyze to do.\n> */\n> - if (!inh)\n> + if (!inh || onerel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n> pgstat_report_analyze(onerel, totalrows, totaldeadrows,\n>\n> This still rejects traditional inheritance nonleaf relations. But if\n> we remove the description about that completely in the comment above,\n> we should support traditional inheritance parents here. 
I think we\n> can do that as far as we need to propagate only per-tuple stats (that\n> mans not per-attribute) like changes_since_analyze.\n>\nWith a manual ANALYZE, as opposed to auto ANALYZE, analyzing a declaratively\npartitioned table analyzes all of its children at the same time. However,\nin the case of traditional inheritance, we need to run that command on\neach child table individually; the children are not analyzed all together\nby a single ANALYZE. So for now I tried to support auto analyze only for\ndeclarative partitioning.\nI added a note to that comment saying that we only support declarative\npartitioning.\n\n> Whichever way we take, do we need the description about the behavior\n> in the documentation?\n>\nI added a description of this behavior to the documentation.\n\nI attach the latest patch to this email.\nCould you please check it again?\n\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center",
"msg_date": "Thu, 5 Nov 2020 14:04:15 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Hi Justin,\n\nThank you for your comments.\nI attached the latest patch (v11) to the previous email.\n\n>\n> + * Get its all ancestors to propagate changes_since_analyze count.\n> + * However, when ANALYZE inheritance tree, we get ancestors of\n> + * toprel_oid to avoid needless counting.\n>\n> => I don't understand that comment.\n>\nI fixed that comment.\n\n> + /* Find all members of inheritance set taking AccessShareLock */\n> + children = find_all_inheritors(relid, AccessShareLock, NULL);\n>\n> => Do you know that returns the table itself ? And in pg14dev, each\n> partitioned table has reltuples = -1, not zero...\n>\n> + /* Skip foreign partitions */\n> + if (childclass->relkind == RELKIND_FOREIGN_TABLE)\n> + continue;\n>\n> => Michael's suggrestion is to use RELKIND_HAS_STORAGE to skip both foreign and\n> partitioned tables.\n>\nI overlooked that. I revised it according to your comments.\n\n> Also, you called SearchSysCacheCopy1, but didn't free the tuple. I don't think\n> you need to copy it anyway - just call ReleaseSysCache().\n>\nFixed it.\n\n> Regarding the counters in pg_stat_all_tables: maybe some of these should be\n> null rather than zero ? Or else you should make an 0001 patch to fully\n> implement this view, with all relevant counters, not just n_mod_since_analyze,\n> last_*analyze, and *analyze_count. These are specifically misleading:\n>\n> last_vacuum |\n> last_autovacuum |\n> n_ins_since_vacuum | 0\n> vacuum_count | 0\n> autovacuum_count | 0\n>\nI haven't modified this part yet, but did you mean that we should set\nthe vacuum-related counters to null because partitioned tables are not\nvacuumed?\n\n--\nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 5 Nov 2020 16:03:12 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "At Thu, 5 Nov 2020 16:03:12 +0900, yuzuko <yuzukohosoya@gmail.com> wrote in \n> Hi Justin,\n> \n> Thank you for your comments.\n> I attached the latest patch(v11) to the previous email.\n> \n> >\n> > + * Get its all ancestors to propagate changes_since_analyze count.\n> > + * However, when ANALYZE inheritance tree, we get ancestors of\n> > + * toprel_oid to avoid needless counting.\n> >\n> > => I don't understand that comment.\n> >\n> I fixed that comment.\n\n+\t\t * Get its all ancestors to propagate changes_since_analyze count.\n+\t\t * However, when we have a valid toprel_oid, that is ANALYZE inheritance\n+\t\t * tree, if we propagate the number to all ancestors, the next analyze\n+\t\t * on partitioned tables in the tree could happen shortly expected.\n+\t\t * So we get ancestors of toprel_oid which are not analyzed this time.\n\nOn second thought about the reason for the \"toprel_oid\": it is perhaps\nto avoid \"wrongly\" propagated values to ancestors after a manual\nANALYZE on a partitioned table. But the same happens after an\nautoanalyze iteration if some of the ancestors of a leaf relation are\nanalyzed before the leaf relation in an autoanalyze iteration. That\ncan trigger an unnecessary analyze for some of the ancestors.\nSo we need to do a similar thing for autovacuum as well. However:\n\n [1(root):analyze]-[2:DONT analyze]-[3:analyze]-[leaf]\n\nIn this case toprel_oid is invalid (since it's autoanalyze) but we\nshould avoid propagating the count to 1 and 3 if it is processed\n*before* the leaf, but should propagate to 2. toprel_oid doesn't work\nin that case.\n\nSo, to propagate the count properly, we need to analyze relations in\nleaf-to-root order, or propagate the counter only to ancestors that\nhaven't been processed in the current iteration. It seems a bit too\ncomplex to sort analyze relations in that order. The latter would be\nrelatively simple. 
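As a toy illustration of that second option - all names here are hypothetical, none of this is the actual patch code - the bookkeeping could look like:

```c
#include <stdbool.h>

#define MAX_ANCESTORS 8

/* Toy stand-in for a relation's stats entry; not the real pgstat types. */
typedef struct ToyRel
{
	long	changes_since_analyze;
	bool	analyzed_this_round;
	int		nancestors;
	int		ancestors[MAX_ANCESTORS];	/* indexes into the rels array */
} ToyRel;

/*
 * Analyze a leaf: propagate its pending count only to ancestors that
 * have not been analyzed earlier in the current iteration, then reset
 * the leaf's own counter.
 */
static void
toy_analyze(ToyRel *rels, int leaf)
{
	long	count = rels[leaf].changes_since_analyze;
	int		i;

	for (i = 0; i < rels[leaf].nancestors; i++)
	{
		int		a = rels[leaf].ancestors[i];

		if (!rels[a].analyzed_this_round)
			rels[a].changes_since_analyze += count;
	}
	rels[leaf].changes_since_analyze = 0;
	rels[leaf].analyzed_this_round = true;
}
```

An ancestor that was already analyzed earlier in the same iteration (relation 3 in the picture above) simply does not receive the count, so it is not re-triggered.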
See the attached for how it looks.\n\nAnyway, whichever way we take, it is not pgstat.c's responsibility to do\nthat, since the former needs to rely heavily on what analyze does,\nand the latter needs to know what analyze is doing.\n\n\n> > Also, you called SearchSysCacheCopy1, but didn't free the tuple. I don't think\n> > you need to copy it anyway - just call ReleaseSysCache().\n> >\n> Fixed it.\n\nMmm. Unfortunately, that fix leaks a cache reference when\n!RELKIND_HAS_STORAGE.\n\n> > Regarding the counters in pg_stat_all_tables: maybe some of these should be\n> > null rather than zero ? Or else you should make an 0001 patch to fully\n> > implement this view, with all relevant counters, not just n_mod_since_analyze,\n> > last_*analyze, and *analyze_count. These are specifically misleading:\n> >\n> > last_vacuum |\n> > last_autovacuum |\n> > n_ins_since_vacuum | 0\n> > vacuum_count | 0\n> > autovacuum_count | 0\n> >\n> I haven't modified this part yet, but you meant that we should set\n> null to counters\n> about vacuum because partitioned tables are not vacuumed?\n\nPerhaps because partitioned tables *cannot* be vacuumed. I'm not sure\nwhat is the best way here. Showing null seems reasonable but I'm not\nsure that doesn't break anything.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 10 Nov 2020 20:35:57 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "I looked at both Yuzuko Hosoya's patch and Kyotaro Horiguchi's, and\nthink we're doing things in a quite complicated manner, which perhaps\ncould be done more easily.\n\nHosoya's patch has pgstat_report_analyze call pgstat_get_tab_entry() for\nthe table being vacuumed, then obtains the list of ancestors, and then\nsends for each ancestor a new message containing the partition's\nchanges_since_analyze for that ancestor. When stat collector receives\nthat message, it adds the number to the ancestor's m_changed_tuples.\n\nHoriguchi's doing a similar thing, only differently: it is do_analyze_rel \nthat reads the count from the collector (this time by calling SQL\nfunction pg_stat_get_mod_since_analyze) and then sends number back to\nthe collector for each ancestor.\n\n\nI suggest that a better way to do this, is to forget about the new\n\"partchanges\" message completely. Instead, let's add an array of\nancestors to the analyze message (borrowing from PgStat_MsgFuncstat).\nSomething like this:\n\n#define PGSTAT_NUM_ANCESTORENTRIES \\\n ((PGSTAT_MSG_PAYLOAD - sizeof(Oid) - sizeof(Oid) - sizeof(bool) - \\\n sizeof(bool) - sizeof(TimestampTz) - sizeof(PgStat_Counter) - \\\n sizeof(PgStat_Counter) - sizeof(int)) / sizeof(Oid))\ntypedef struct PgStat_MsgAnalyze\n{\n PgStat_MsgHdr m_hdr;\n Oid m_databaseid;\n Oid m_tableoid;\n bool m_autovacuum;\n bool m_resetcounter;\n TimestampTz m_analyzetime;\n PgStat_Counter m_live_tuples;\n PgStat_Counter m_dead_tuples;\n int m_nancestors;\n Oid m_ancestors[PGSTAT_NUM_ANCESTORENTRIES];\n} PgStat_MsgAnalyze;\n\nFor non-partitions, m_nancestors would be 0, so the message would be\nhandled as today. For partitions, the array carries the OID of all\nancestors. When the collector receives this message, first it looks up\nthe pgstat entries for each ancestors in the array, and it adds the\npartition's current changes_since_analyze to the ancestor's\nchanges_since_analyze. 
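In toy C, that collector-side step could be sketched like this (illustrative names only; the real code would live in pgstat.c and use the actual hash-table lookups):

```c
/* Toy stand-in for a stats-collector table entry. */
typedef struct ToyTabEntry
{
	long	changes_since_analyze;
} ToyTabEntry;

/*
 * On receipt of the analyze message for a partition, add the
 * partition's pending count to each ancestor named in the message,
 * then reset the partition's own counter.
 */
static void
toy_recv_analyze(ToyTabEntry *tables, int partition,
				 const int *ancestors, int nancestors)
{
	int		i;

	for (i = 0; i < nancestors; i++)
		tables[ancestors[i]].changes_since_analyze +=
			tables[partition].changes_since_analyze;

	tables[partition].changes_since_analyze = 0;
}
```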
Then it does things as it currently does, including\nresetting the changes_since_analyze counter for the partition.\n\nThe key point here is that we don't need to read the number from the\ncollector into the backend executing analyze. We just *send* the data\nabout ancestors, and the collector knows what to do with it.\n\n\nOne possible complaint is: what if there are more ancestors than fit in\nthe message? I propose that this problem can be ignored, since in order\nto hit this, you'd need to have (1000-8-4-4-1-1-8-8-8-4)/4 = 238\nancestors (if my math is right). I doubt we'll hit the need to use 238\nlevels of partitioning before a stat collector rewrite occurs ...\n\n(It is possible to remove that restriction by doing more complicated\nthings such as sending the list of ancestors in a new type of message\nthat can be sent several times, prior to the analyze message itself, but\nI don't think this is worth the trouble.)\n\n\n",
"msg_date": "Mon, 30 Nov 2020 13:07:12 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On 2020-Nov-10, Kyotaro Horiguchi wrote:\n\n> In second thought about the reason for the \"toprel_oid\". It is perhaps\n> to avoid \"wrongly\" propagated values to ancestors after a manual\n> ANALYZE on a partitioned table. But the same happens after an\n> autoanalyze iteration if some of the ancestors of a leaf relation are\n> analyzed before the leaf relation in a autoanalyze iteration. That\n> can trigger an unnecessary analyzing for some of the ancestors.\n\nI'm not sure I understand this point. I think we should only trigger\nthis on analyzes of *leaf* partitions, not intermediate partitioned\nrelations. That way you would never get these unnecessary analyzes.\nAm I missing something?\n\n(So with my proposal in the previous email, we would send the list of\nancestor relations after analyzing a leaf partition. Whenever we\nanalyze a non-leaf, the list of ancestors is sent as an empty\nlist.)\n\n> > > Regarding the counters in pg_stat_all_tables: maybe some of these should be\n> > > null rather than zero ? Or else you should make an 0001 patch to fully\n> > > implement this view, with all relevant counters, not just n_mod_since_analyze,\n> > > last_*analyze, and *analyze_count. These are specifically misleading:\n> > >\n> > > last_vacuum |\n> > > last_autovacuum |\n> > > n_ins_since_vacuum | 0\n> > > vacuum_count | 0\n> > > autovacuum_count | 0\n> > >\n> > I haven't modified this part yet, but you meant that we should set\n> > null to counters\n> > about vacuum because partitioned tables are not vacuumed?\n> \n> Perhaps bacause partitioned tables *cannot* be vacuumed. I'm not sure\n> what is the best way here. Showing null seems reasonable but I'm not\n> sure that doesn't break anything.\n\nI agree that showing NULLs for the vacuum columns is better. Perhaps\nthe most reasonable way to do this is to use -1 as an indicator that NULL\nought to be returned from pg_stat_get_vacuum_count() et al, and add a\nboolean in PgStat_TableCounts next to t_truncated, maybe \"t_nullvacuum\",\nthat says to store -1 instead of 0 in pgstat_recv_tabstat.\n\n\n",
"msg_date": "Mon, 30 Nov 2020 14:57:09 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Hello Alvaro,\n\nThank you for your comments.\n\n>\n> > In second thought about the reason for the \"toprel_oid\". It is perhaps\n> > to avoid \"wrongly\" propagated values to ancestors after a manual\n> > ANALYZE on a partitioned table. But the same happens after an\n> > autoanalyze iteration if some of the ancestors of a leaf relation are\n> > analyzed before the leaf relation in a autoanalyze iteration. That\n> > can trigger an unnecessary analyzing for some of the ancestors.\n>\n> I'm not sure I understand this point. I think we should only trigger\n> this on analyzes of *leaf* partitions, not intermediate partitioned\n> relations. That way you would never get these unnecesary analyzes.\n> Am I missing something?\n>\n> (So with my proposal in the previous email, we would send the list of\n> ancestor relations after analyzing a leaf partition. Whenever we\n> analyze a non-leaf, then the list of ancestors is sent as an empty\n> list.)\n>\nThe problem Horiguchi-san mentioned is as follows:\n\ncreate table p1 (i int) partition by range(i);\ncreate table p1_1 partition of p1 for values from (0) to (500)\npartition by range(i);\ncreate table p1_1_1 partition of p1_1 for values from (0) to (300);\ninsert into p1 select generate_series(0,299);\n\n-- After auto analyze (first time)\npostgres=# select relname, n_mod_since_analyze, last_autoanalyze from\npg_stat_all_tables where relname in ('p1','p1_1','p1_1_1');\n relname | n_mod_since_analyze | last_autoanalyze\n---------+---------------------+-------------------------------\n p1 | 300 |\n p1_1 | 300 |\n p1_1_1 | 0 | 2020-12-02 22:24:18.753574+09\n(3 rows)\n\n-- Insert more rows\npostgres=# insert into p1 select generate_series(0,199);\npostgres=# select relname, n_mod_since_analyze, last_autoanalyze from\npg_stat_all_tables where relname in ('p1','p1_1','p1_1_1');\n relname | n_mod_since_analyze | last_autoanalyze\n---------+---------------------+-------------------------------\n p1 | 300 |\n p1_1 | 
300 |\n p1_1_1 | 200 | 2020-12-02 22:24:18.753574+09\n(3 rows)\n\n-- After auto analyze (second time)\npostgres=# select relname, n_mod_since_analyze, last_autoanalyze from\npg_stat_all_tables where relname in ('p1','p1_1','p1_1_1');\nrelname | n_mod_since_analyze | last_autoanalyze\n---------+---------------------+-------------------------------\n p1 | 0 | 2020-12-02 22:25:18.857248+09\n p1_1 | 200 | 2020-12-02 22:25:18.661932+09\n p1_1_1 | 0 | 2020-12-02 22:25:18.792078+09\n\nAfter 2nd auto analyze, all relations' n_mod_since_analyze should be 0,\nbut p1_1's is not. This is because p1_1 was analyzed before p1_1_1.\nSo p1_1 will be analyzed again in the 3rd auto analyze.\nThat is propagating changes_since_analyze to *all ancestors* may cause\nunnecessary analyzes even if we do this only when analyzing leaf partitions.\nSo I think we should track ancestors which are not analyzed in the current\niteration as Horiguchi-san proposed.\n\nRegarding your idea:\n> typedef struct PgStat_MsgAnalyze\n> {\n> PgStat_MsgHdr m_hdr;\n> Oid m_databaseid;\n> Oid m_tableoid;\n> bool m_autovacuum;\n> bool m_resetcounter;\n> TimestampTz m_analyzetime;\n> PgStat_Counter m_live_tuples;\n> PgStat_Counter m_dead_tuples;\n> int m_nancestors;\n> Oid m_ancestors[PGSTAT_NUM_ANCESTORENTRIES];\n> } PgStat_MsgAnalyze;\n\nI'm not sure but how about storing only ancestors that aren't analyzed\nin the current\niteration in m_ancestors[PGSTAT_NUM_ANCESTORENTRIES] ?\n\n\n> > > > Regarding the counters in pg_stat_all_tables: maybe some of these should be\n> > > > null rather than zero ? Or else you should make an 0001 patch to fully\n> > > > implement this view, with all relevant counters, not just n_mod_since_analyze,\n> > > > last_*analyze, and *analyze_count. 
These are specifically misleading:\n> > > >\n> > > > last_vacuum |\n> > > > last_autovacuum |\n> > > > n_ins_since_vacuum | 0\n> > > > vacuum_count | 0\n> > > > autovacuum_count | 0\n> > > >\n> > > I haven't modified this part yet, but you meant that we should set\n> > > null to counters\n> > > about vacuum because partitioned tables are not vacuumed?\n> >\n> > Perhaps bacause partitioned tables *cannot* be vacuumed. I'm not sure\n> > what is the best way here. Showing null seems reasonable but I'm not\n> > sure that doesn't break anything.\n>\n> I agree that showing NULLs for the vacuum columns is better. Perhaps\n> the most reasonable way to do this is use -1 as an indicator that NULL\n> ought to be returned from pg_stat_get_vacuum_count() et al, and add a\n> boolean in PgStat_TableCounts next to t_truncated, maybe \"t_nullvacuum\"\n> that says to store -1 instead of 0 in pgstat_recv_tabstat.\n>\nThank you for the advice. I'll fix it based on this idea.\n\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 2 Dec 2020 23:11:16 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Hello Yuzuko,\n\nOn 2020-Dec-02, yuzuko wrote:\n\n> The problem Horiguchi-san mentioned is as follows:\n> [explanation]\n\nHmm, I see. So the problem is that if some ancestor is analyzed first,\nthen an analyze of one of its partitions will cause a redundant analyze of\nthe ancestor, because the number of tuples that is propagated from the\npartition represents a set that had already been included in the\nancestor's analysis.\n\nIf the problem was just that, then I think it would be very simple to\nsolve: just make sure to sort the tables to vacuum so that all leaves\nare vacuumed first, and then all ancestors, sorted from the bottom up.\nProblem solved.\n\nBut I'm not sure that that's the whole story, for two reasons: one, two\nworkers can run simultaneously, where one analyzes the partition and the\nother analyzes the ancestor. Then the order is not guaranteed (and\neach process will get no effect from remembering whether it did that one\nor not). Second, manual analyzes can occur in any order.\n\nMaybe it's more useful to think about this in terms of remembering that\npartition P had changed_tuples set to N when we analyzed ancestor A.\nThen, when we analyze partition P, we send the message listing A as\nancestor; on receipt of that message, we see M+N changed tuples in P,\nbut we know that we had already seen N, so we only record M.\n\nI'm not sure how to implement this idea however, since on analyze of\nancestor A we don't have the list of partitions, so we can't know the N\nfor each partition.\n\n\n\n",
"msg_date": "Thu, 3 Dec 2020 10:28:09 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Hello Alvaro,\n\nOn Thu, Dec 3, 2020 at 10:28 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Hello Yuzuko,\n>\n> On 2020-Dec-02, yuzuko wrote:\n>\n> > The problem Horiguchi-san mentioned is as follows:\n> > [explanation]\n>\n> Hmm, I see. So the problem is that if some ancestor is analyzed first,\n> then analyze of one of its partition will cause a redundant analyze of\n> the ancestor, because the number of tuples that is propagated from the\n> partition represents a set that had already been included in the\n> ancestor's analysis.\n>\n> If the problem was just that, then I think it would be very simple to\n> solve: just make sure to sort the tables to vacuum so that all leaves\n> are vacuumed first, and then all ancestors, sorted from the bottom up.\n> Problem solved.\n>\n\nIndeed. When I discussed this with Horiguchi-san before, he suggested\nthe same approach:\n> So, to propagate the count properly, we need to analyze relations\n> leaf-to-root order, or propagate the counter only to anscestors that\n> haven't been processed in the current iteration. It seems a bit too\n> complex to sort analyze relations in that order.\n\nbut we didn't select it because of its complexity, as you also said.\n\n> But I'm not sure that that's the whole story, for two reasons: one, two\n> workers can run simultaneously, where one analyzes the partition and the\n> other analyzes the ancestor. Then the order is not guaranteed (and\n> each process will get no effect from remembering whether it did that one\n> or not). Second, manual analyzes can occur in any order.\n>\n> Maybe it's more useful to think about this in terms of rememebering that\n> partition P had changed_tuples set to N when we analyzed ancestor A.\n> Then, when we analyze partition P, we send the message listing A as\n> ancestor; on receipt of that message, we see M+N changed tuples in P,\n> but we know that we had already seen N, so we only record M.\n>\n> I'm not sure how to implement this idea however, since on analyze of\n> ancestor A we don't have the list of partitions, so we can't know the N\n> for each partition.\n>\nI thought about it for a while, but I couldn't come up with a way to\nimplement it. I also think the other approach Horiguchi-san suggested\nin [1] would be a simpler way to solve the problem we are facing.\n\nI attached a new patch based on his. What do you think?\n\n[1] https://www.postgresql.org/message-id/20201110.203557.1420746510378864931.horikyota.ntt%40gmail.com\n\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center",
"msg_date": "Tue, 15 Dec 2020 10:46:26 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On 12/14/20 8:46 PM, yuzuko wrote:\n> \n> On Thu, Dec 3, 2020 at 10:28 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> Attach the new patch based on his patch. What do you think?\n\nÁlvaro, Justin, Kyotaro, thoughts on this latest patch?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 11 Mar 2021 11:06:36 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Hi,\n\nI took a look at this patch. It does not apply because of 5f8727f5a67,\nso a rebase is needed. But I want to talk about the general approach\nhere, so it does not matter.\n\nThe thread is fairly long, both in terms of number of messages and time\n(started in 2019), so let me restate my understanding of the problem and\nwhat the patch aims to do.\n\nThe problem is that autovacuum never analyzes non-leaf relations in\npartition hierarchies, because they never get modified and so the value\nof changes_since_analyze remains 0. This applies both to partitioning\nbased on inheritance and the new fancy declarative partitioning. The\nconsequence is that we never have accurate statistics (MCV, histograms\nand so on) for the parent, which may lead to poor query plans in cases\nwhen we don't use the child statistics for some reason.\n\nThe patch aims to fix that by propagating the changes_since_analyze to\nthe parent relations, so that the autovacuum properly considers if those\nnon-leaf relations need analyze.\n\nI think the goal is right, and propagating the changes_since_analyze is\nthe right solution, but as coded it has a couple of issues that may cause\ntrouble in practice.\n\n\nFirstly, the patch propagates the changes_since_analyze values from\ndo_analyze_rel, i.e. from the worker after it analyzes the relation.\nThat may easily lead to cases with unnecessary analyzes - consider a\npartitioned table with 4 child relations:\n\n p1 [reltuples=1M, changes_since_analyze=400k]\n p2 [reltuples=1M, changes_since_analyze=90k]\n p3 [reltuples=1M, changes_since_analyze=90k]\n p4 [reltuples=1M, changes_since_analyze=90k]\n\nWith the default analyze threshold (10%) this means autoanalyze of p1,\nand then (in the next round) analyze of the whole partitioned table,\nbecause 400k is 10% of 4M. 
So far so good - we're now in this state:\n\n p1 [reltuples=1M, changes_since_analyze=0]\n p2 [reltuples=1M, changes_since_analyze=90k]\n p3 [reltuples=1M, changes_since_analyze=90k]\n p4 [reltuples=1M, changes_since_analyze=90k]\n\nLet's do ~310k more modifications to p2:\n\n p1 [reltuples=1M, changes_since_analyze=0]\n p2 [reltuples=1M, changes_since_analyze=400k]\n p3 [reltuples=1M, changes_since_analyze=90k]\n p4 [reltuples=1M, changes_since_analyze=90k]\n\nNow p2 gets analyzed, and the value gets propagated to the parent,\ntriggering the\nanalyze. But that's bogus - we've already seen 90k of those rows in the\nlast analyze, the \"actual\" changes_since_analyze is just 310k and that\nshould not have triggered the analyze.\n\nI could invent much more extreme examples with more partitions, and/or\nwith multiple autovacuum workers processing the leaf rels concurrently.\n\nThis seems like a quite serious issue, because analyzes on partitioned\ntables sample all the partitions, which seems rather expensive. That is\nnot an issue introduced by this patch, of course, but it's good to keep\nthat in mind and not make matters worse.\n\nNote: I do have some ideas about how to improve that, I've started a\nseparate thread about it [1].\n\n\nIMHO the primary issue is that the patch is trying to report the counts from\nthe workers, and it's done incrementally, after the fact. It tries to\nprevent the issue by not propagating the counts to parents analyzed in\nthe same round, but that doesn't seem sufficient:\n\n- There's no guarantee how long ago the parent was analyzed. Maybe it\nwas 1 second ago, or maybe it was 24h ago and there have been many new\nmodifications since then?\n\n- The hash table is per worker, so who knows what the other\nautovacuum workers did?\n\nSo not really a good solution, I'm afraid.\n\n\nI propose a different approach - instead of propagating the counts in\ndo_analyze_rel for individual leaf tables, let's do that in bulk in\nrelation_needs_vacanalyze. 
Before the (existing) first pass over\npg_class, we can add a new one, propagating counts from leaf tables to\nparents. I'd imagine something like this\n\n while ((tuple = heap_getnext(relScan, ... != NULL)\n {\n Form_pg_class classForm = (Form_pg_class) GETSTRUCT(tuple);\n\n ... find all ancestors for classForm ...\n\n pgstat_propagate_changes(classForm, ancestors);\n }\n\nThe pgstat_propagate_changes() simply informs the pgstat collector that\nclassForm has certain ancestors, and it propagates the value to all of\nthem. There's a problem, though - we can't reset the value for the leaf\ntable, because we need to check if it needs analyze, but we also don't\nwant to send it again next time. But we can add another counter,\ntracking the part of changes_since_analyze we already propagated, and\npropagate only the difference. That is, we now have this:\n\n PgStat_Counter changes_since_analyze;\n PgStat_Counter changes_since_analyze_reported;\n\nSo for example we start with\n\n changes_since_analyze = 10000;\n changes_since_analyze_reported = 0;\n\nand we propagate 10k to parents:\n\n changes_since_analyze = 10000;\n changes_since_analyze_reported = 10000;\n\nbut we don't analyze anything, and we accumulate 5k more changes:\n\n changes_since_analyze = 15000;\n changes_since_analyze_reported = 10000;\n\nso now we propagate only the 5k delta. And so on. It's not exactly an\natomic change (we still do this per relation), but it's \"bulk\" in the\nsense that we force the propagation and don't wait until after the leaf\nhappens to be analyzed.\n\nIt might need to reread the stats file I think, to get the incremented\nvalues, but that seems acceptable.\n\nWe may need to \"sync\" the counts for individual relations in a couple\nplaces (e.g. after the worker is done with the leaf, it should propagate\nthe remaining delta before resetting the values to 0). 
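As a toy sketch of that two-counter bookkeeping (the field names follow the discussion, but this is only an illustration, not the patch itself):

```c
/* Toy stand-in for the per-table counters. */
typedef struct ToyCounters
{
	long	changes_since_analyze;
	long	changes_since_analyze_reported;
} ToyCounters;

/*
 * Return the not-yet-propagated part of changes_since_analyze and
 * remember that it has now been reported, so the next round only
 * sends the new delta.
 */
static long
toy_propagate_delta(ToyCounters *c)
{
	long	delta = c->changes_since_analyze -
		c->changes_since_analyze_reported;

	c->changes_since_analyze_reported = c->changes_since_analyze;
	return delta;
}
```

With the numbers above, the first call returns 10000, and after 5k more changes the next call returns only the 5000 delta.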
Maybe multi-level\npartitioning needs some additional handling, not sure.\n\n\n\nregards\n\n\n[1] https://commitfest.postgresql.org/33/3052/\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 30 Mar 2021 04:09:44 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "\n\nOn 3/30/21 4:09 AM, Tomas Vondra wrote:\n> Hi,\n> \n> ...\n> \n> We may need to \"sync\" the counts for individual relations in a couple\n> places (e.g. after the worker is done with the leaf, it should propagate\n> the remaining delta before resetting the values to 0). Maybe multi-level\n> partitioning needs some additional handling, not sure.\n> \n\nI forgot to mention one additional thing yesterday - I wonder if we need\nto do something similar after a partition is attached/detached. That can\nalso change the parent's statistics significantly, so maybe we should\nhandle all of the partition's rows as changes_since_analyze? Not necessarily\nsomething this patch has to handle, but might be related.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 30 Mar 2021 17:40:10 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Hi Tomas,\n\nThank you for reviewing the patch.\n\n> Firstly, the patch propagates the changes_since_analyze values from\n> do_analyze_rel, i.e. from the worker after it analyzes the relation.\n> That may easily lead to cases with unnecessary analyzes - consider a\n> partitioned with 4 child relations:\n> [ explanation ]\n>\nI didn't realize that till now. Indeed, this approach increments parent's\nchanges_since_analyze counter according to its leaf partition's counter\nwhen the leaf partition is analyzed, so it will cause unnecessary ANALYZE\non partitioned tables as you described.\n\n\n> I propose a different approach - instead of propagating the counts in\n> do_analyze_rel for individual leaf tables, let's do that in bulk in\n> relation_needs_vacanalyze. Before the (existing) first pass over\n> pg_class, we can add a new one, propagating counts from leaf tables to\n> parents.\n>\nThank you for your suggestion. I think it could solve all the issues\nyou mentioned. I modified the patch based on this approach:\n\n- Create a new counter, PgStat_Counter changes_since_analyze_reported,\n to track changes_since_analyze we already propagated to ancestors.\n This is used for internal processing and users may not need to know it.\n So this counter is not displayed at pg_stat_all_tables view for now.\n\n- Create a new function, pgstat_propagate_changes() which propagates\n changes_since_analyze counter to all ancestors and saves\n changes_since_analyze_reported. 
This function is called in\n do_autovacuum() before relation_needs_vacanalyze().\n\n\n> Note: I do have some ideas about how to improve that, I've started a\n> separate thread about it [1].\n>\nI'm also interested in merging children's statistics for partitioned tables\nbecause it will make ANALYZE on inheritance trees more efficient.\nSo I'll check it later.\n\n> I forgot to mention one additional thing yesterday - I wonder if we need\n> to do something similar after a partition is attached/detached. That can\n> also change the parent's statistics significantly, so maybe we should\n> count all of the partition's rows as changes_since_analyze? Not necessarily\n> something this patch has to handle, but might be related.\n>\nRegarding attached/detached partitions, I think we should update the statistics\nof partitioned tables according to the new inheritance tree. The latest patch\ndoesn't handle this case yet, but I'll give it a try soon.\n\nI've attached the v13 patch to this email. Could you please check it again?\n\n--\nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center",
"msg_date": "Thu, 1 Apr 2021 20:34:53 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Thanks for the quick rework. I like this design much better and I think\nthis is pretty close to committable. Here's a rebased copy with some\nsmall cleanups (most notably, avoid calling pgstat_propagate_changes\nwhen the partition doesn't have a tabstat entry; also, free the lists\nthat are allocated in a couple of places).\n\nI didn't actually verify that it works.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"La primera ley de las demostraciones en vivo es: no trate de usar el sistema.\nEscriba un gui�n que no toque nada para no causar da�os.\" (Jakob Nielsen)",
"msg_date": "Sat, 3 Apr 2021 16:42:34 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On 4/3/21 9:42 PM, Alvaro Herrera wrote:\n> Thanks for the quick rework. I like this design much better and I think\n> this is pretty close to committable. Here's a rebased copy with some\n> small cleanups (most notably, avoid calling pgstat_propagate_changes\n> when the partition doesn't have a tabstat entry; also, free the lists\n> that are allocated in a couple of places).\n> \n> I didn't actually verify that it works.\n> \n\nYeah, this approach seems much simpler, I think. That being said, I\nthink there's a couple issues:\n\n1) I still don't understand why inheritance and declarative partitioning\nare treated differently. Seems unnecessary nad surprising, but maybe\nthere's a good reason?\n\n\n2) pgstat_recv_tabstat\n\nShould it really reset changes_since_analyze_reported in both branches?\nAFAICS if the \"found\" branch does this\n\n tabentry->changes_since_analyze_reported = 0;\n\nthat means we lose the counter any time tabstats are received, no?\nThat'd be wrong, because we'd propagate the changes repeatedly.\n\n\n3) pgstat_recv_analyze\n\nShouldn't it propagate the counters before resetting them? I understand\nthat for the just-analyzed relation we can't do better, but why not to\npropagate the counters to parents? (Not necessarily from this place in\nthe stat collector, maybe the analyze process should do that.)\n\n\n4) pgstat_recv_reportedchanges\n\nI think this needs to be more careful when updating the value - the\nstats collector might have received other messages modifying those\ncounters (including e.g. PGSTAT_MTYPE_ANALYZE with a reset), so maybe we\ncan get into situation with\n\n (changes_since_analyze_reported > changes_since_analyze)\n\nif we just blindly increment the value. I'd bet would lead to funny\nstuff. So maybe this should be careful to never exceed this?\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 4 Apr 2021 21:08:51 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On 2021-Apr-04, Tomas Vondra wrote:\n\n> 1) I still don't understand why inheritance and declarative partitioning\n> are treated differently. Seems unnecessary nad surprising, but maybe\n> there's a good reason?\n\nI suppose the rationale is that for inheritance we have always done it\nthat way -- similar things have been done that way for inheritance\nhistorically, to avoid messing with long-standing behavior. We have\ndone that in a bunch of places -- DDL behavior, FKs, etc. Maybe in this\ncase it's not justified. It *will* change behavior, in the sense that\nwe are going to capture stats that have never been captured before.\nThat might or might not affect query plans for designs using regular\ninheritance. But it seems reasonable to think that those changes will\nbe for the good; and if it does break plans for some people and they\nwant to revert to the original behavior, they can just set\nautovacuum_enabled to off for the parent tables.\n\nSo, I agree that we should enable this new feature for inheritance\nparents too.\n\n\nI can't comment on the other issues. I hope to give this a closer look\ntomorrow my time; with luck Hosoya-san will have commented by then.\n\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\"La rebeld�a es la virtud original del hombre\" (Arthur Schopenhauer)\n\n\n",
"msg_date": "Sun, 4 Apr 2021 16:05:14 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "\n\nOn 4/4/21 10:05 PM, Alvaro Herrera wrote:\n> On 2021-Apr-04, Tomas Vondra wrote:\n> \n>> 1) I still don't understand why inheritance and declarative partitioning\n>> are treated differently. Seems unnecessary nad surprising, but maybe\n>> there's a good reason?\n> \n> I suppose the rationale is that for inheritance we have always done it\n> that way -- similar things have been done that way for inheritance\n> historically, to avoid messing with long-standing behavior. We have\n> done that in a bunch of places -- DDL behavior, FKs, etc. Maybe in this\n> case it's not justified. It *will* change behavior, in the sense that\n> we are going to capture stats that have never been captured before.\n> That might or might not affect query plans for designs using regular\n> inheritance. But it seems reasonable to think that those changes will\n> be for the good; and if it does break plans for some people and they\n> want to revert to the original behavior, they can just set\n> autovacuum_enabled to off for the parent tables.\n> \n> So, I agree that we should enable this new feature for inheritance\n> parents too.\n> \n\nNot sure. AFAICS the missing stats on parents are an issue both for\ninheritance and partitioning. Maybe there is a reason to maintain the\ncurrent behavior with inheritance, but I don't see it.\n\nWith the other features, I think the reason for not implementing that\nfor inheritance was that it'd be more complex, compared to declarative\npartitioning (which has stricter limitations on the partitions, etc.).\nBut in this case I think there's no difference in complexity, the same\ncode can handle both cases.\n\nIn fact, one of the first posts in this threads links to this:\n\nhttps://www.postgresql.org/message-id/4823.1262132964%40sss.pgh.pa.us\n\ni.e. 
Tom actually proposed doing something like this back in 2009, so\npresumably he thought it was desirable back then.\n\nOTOH he argued against adding another per-table counter and proposed\nessentially what the patch did before, i.e. propagating the counter\nafter analyze. But we know that may trigger analyze too often ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 4 Apr 2021 23:29:27 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On 2021-Apr-04, Tomas Vondra wrote:\n\n> In fact, one of the first posts in this threads links to this:\n> \n> https://www.postgresql.org/message-id/4823.1262132964%40sss.pgh.pa.us\n> \n> i.e. Tom actually proposed doing something like this back in 2009, so\n> presumably he though it's desirable back then.\n> \n> OTOH he argued against adding another per-table counter and proposed\n> essentially what the patch did before, i.e. propagating the counter\n> after analyze. But we know that may trigger analyze too often ...\n\nYeah, I think that's a doomed approach. The reason to avoid another\ncolumn is to avoid bloat, which is good but if we end up with an\nunworkable design then we know we have to backtrack on it.\n\nI was thinking that we could get away with having a separate pgstat\nstruct for partitioned tables, to avoid enlarging the struct for all\ntables, but if we're going to also include legacy inheritance in the\nfeature clearly that doesn't work.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"After a quick R of TFM, all I can say is HOLY CR** THAT IS COOL! PostgreSQL was\namazing when I first started using it at 7.2, and I'm continually astounded by\nlearning new features and techniques made available by the continuing work of\nthe development team.\"\nBerend Tober, http://archives.postgresql.org/pgsql-hackers/2007-08/msg01009.php\n\n\n",
"msg_date": "Sun, 4 Apr 2021 18:51:58 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "\n\nOn 4/4/21 9:08 PM, Tomas Vondra wrote:\n> On 4/3/21 9:42 PM, Alvaro Herrera wrote:\n>> Thanks for the quick rework. I like this design much better and I think\n>> this is pretty close to committable. Here's a rebased copy with some\n>> small cleanups (most notably, avoid calling pgstat_propagate_changes\n>> when the partition doesn't have a tabstat entry; also, free the lists\n>> that are allocated in a couple of places).\n>>\n>> I didn't actually verify that it works.\n>>> ...\n> \n> 3) pgstat_recv_analyze\n> \n> Shouldn't it propagate the counters before resetting them? I understand\n> that for the just-analyzed relation we can't do better, but why not to\n> propagate the counters to parents? (Not necessarily from this place in\n> the stat collector, maybe the analyze process should do that.)\n> \n\nFWIW the scenario I had in mind is something like this:\n\ncreate table t (a int, b int) partition by hash (a);\ncreate table p0 partition of t for values with (modulus 2, remainder 0);\ncreate table p1 partition of t for values with (modulus 2, remainder 1);\n\ninsert into t select i, i from generate_series(1,1000000) s(i);\n\nselect relname, n_mod_since_analyze from pg_stat_user_tables;\n\ntest=# select relname, n_mod_since_analyze from pg_stat_user_tables;\n relname | n_mod_since_analyze\n---------+---------------------\n t | 0\n p0 | 499375\n p1 | 500625\n(3 rows)\n\ntest=# analyze p0, p1;\nANALYZE\ntest=# select relname, n_mod_since_analyze from pg_stat_user_tables;\n relname | n_mod_since_analyze\n---------+---------------------\n t | 0\n p0 | 0\n p1 | 0\n(3 rows)\n\nThis may seem a bit silly - who would analyze the hash partitions\ndirectly? However, with other partitioning schemes (list, range) it's\nquite plausible that people load data directly into partition. 
They can\nanalyze the parent explicitly too, but with multi-level partitioning\nthat probably requires analyzing all the ancestors.\n\nThe other possible scenario is about rows inserted while p0/p1 are being\nprocessed by autoanalyze. That may actually take quite a bit of time,\ndepending on vacuum cost limit. So I still think we should propagate the\ndelta after the analyze, before we reset the counters.\n\n\nI also realized relation_needs_vacanalyze is not really doing what I\nsuggested - it propagates the counts, but does so in the existing loop\nwhich checks which relations need vacuum/analyze.\n\nThat means we may skip the parent table in the *current* round, because\nit'll see the old (not yet updated) counts. It's likely to be processed\nin the next autovacuum round, but that may actually not happen. The\ntrouble is the reltuples for the parent is calculated using *current*\nchild reltuples values, but we're comparing it to the *old* value of\nchanges_since_analyze. So e.g. if enough rows were inserted into the\npartitions, it may still be below the analyze threshold.\n\nWhat I proposed is adding a separate loop that *only* propagates the\ncounts, and then re-read the current stats (perhaps only if we actually\npropagated anything). And then decide which relations need analyze.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 5 Apr 2021 02:44:45 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On 2021-Apr-04, Tomas Vondra wrote:\n\n> 1) I still don't understand why inheritance and declarative partitioning\n> are treated differently. Seems unnecessary nad surprising, but maybe\n> there's a good reason?\n\nI think there is a good reason to treat them the same: pgstat does not\nhave a provision to keep stats both of the table with children, and the\ntable without children. It can only have one of those. For\npartitioning that doesn't matter: since the table-without-children\ndoesn't have anything on its own (no scans, no tuples, no nothing) then\nwe can just use the entry to store the table-with-children data. But\nfor the inheritance case, the parent can have its own tuples and counts\nits own scans and so on; so if we change things, we'll overwrite the\nstats. Maybe in the long-term we should allow pgstat to differentiate\nthose cases, but that seems not in scope for this patch.\n\nI'm working on the code to fix the other issues.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\n\n",
"msg_date": "Tue, 6 Apr 2021 16:56:49 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-06 16:56:49 -0400, Alvaro Herrera wrote:\n> I think there is a good reason to treat them the same: pgstat does not\n> have a provision to keep stats both of the table with children, and the\n> table without children. It can only have one of those. For\n> partitioning that doesn't matter: since the table-without-children\n> doesn't have anything on its own (no scans, no tuples, no nothing) then\n> we can just use the entry to store the table-with-children data. But\n> for the inheritance case, the parent can have its own tuples and counts\n> its own scans and so on; so if we change things, we'll overwrite the\n> stats. Maybe in the long-term we should allow pgstat to differentiate\n> those cases, but that seems not in scope for this patch.\n\nFWIW, I think it shouldn't be too hard to do that once the shared memory\nstats patch goes in (not 14, unfortunately). The hardest part will be to\navoid exploding the number of interface functions, but I think we can\nfigure out a way to deal with that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 6 Apr 2021 16:58:37 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Hello,\n\nThank you for reviewing.\nI'm working on fixing the patch according to the comments.\nI'll send it as soon as I can.\n\n> On 2021-04-06 16:56:49 -0400, Alvaro Herrera wrote:\n> > I think there is a good reason to treat them the same: pgstat does not\n> > have a provision to keep stats both of the table with children, and the\n> > table without children. It can only have one of those. For\n> > partitioning that doesn't matter: since the table-without-children\n> > doesn't have anything on its own (no scans, no tuples, no nothing) then\n> > we can just use the entry to store the table-with-children data. But\n> > for the inheritance case, the parent can have its own tuples and counts\n> > its own scans and so on; so if we change things, we'll overwrite the\n> > stats. Maybe in the long-term we should allow pgstat to differentiate\n> > those cases, but that seems not in scope for this patch.\n>\n> FWIW, I think it shouldn't be too hard to do that once the shared memory\n> stats patch goes in (not 14, unfortunately). The hardest part will be to\n> avoid exploding the number of interface functions, but I think we can\n> figure out a way to deal with that.\n>\nI've been thinking about traditional inheritance, I realized that we\nneed additional\nhandling to support them because unlike declarative partitioning,\nparents may have\nsome rows in the case of traditional inheritance as Alvaro mentioned.\nSo I think we should support only declarative partitioning in this\npatch for now,\nbut what do you think? I'm not sure but if we can solve this matter\nat low cost by\nusing the shared memory stats patch, should we wait for the patch?\n\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 7 Apr 2021 12:13:13 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On 2021-Apr-07, yuzuko wrote:\n\n> I'm working on fixing the patch according to the comments.\n> I'll send it as soon as I can.\n\nThanks, I've been giving it a look too.\n\n> I've been thinking about traditional inheritance, I realized that we\n> need additional\n> handling to support them because unlike declarative partitioning,\n> parents may have\n> some rows in the case of traditional inheritance as Alvaro mentioned.\n> So I think we should support only declarative partitioning in this\n> patch for now,\n> but what do you think?\n\nYeah, not fixable at present I think.\n\n> I'm not sure but if we can solve this matter at low cost by using the\n> shared memory stats patch, should we wait for the patch?\n\nLet's do that for 15.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\"The problem with the future is that it keeps turning into the present\"\n(Hobbes)\n\n\n",
"msg_date": "Tue, 6 Apr 2021 23:33:46 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Hi,\n\nI fixed the patch according to the following comments.\nAttach the latest patch. It is based on v14 patch Alvaro attached before.\n\nOn Mon, Apr 5, 2021 at 4:08 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 4/3/21 9:42 PM, Alvaro Herrera wrote:\n> > Thanks for the quick rework. I like this design much better and I think\n> > this is pretty close to committable. Here's a rebased copy with some\n> > small cleanups (most notably, avoid calling pgstat_propagate_changes\n> > when the partition doesn't have a tabstat entry; also, free the lists\n> > that are allocated in a couple of places).\n> >\n> > I didn't actually verify that it works.\n> >\n>\n> Yeah, this approach seems much simpler, I think. That being said, I\n> think there's a couple issues:\n>\n> 1) I still don't understand why inheritance and declarative partitioning\n> are treated differently. Seems unnecessary nad surprising, but maybe\n> there's a good reason?\n>\nAs we discussed in this thread, this patch should handle only declarative\npartitioning for now.\n\n>\n> 2) pgstat_recv_tabstat\n>\n> Should it really reset changes_since_analyze_reported in both branches?\n> AFAICS if the \"found\" branch does this\n>\n> tabentry->changes_since_analyze_reported = 0;\n>\n> that means we lose the counter any time tabstats are received, no?\n> That'd be wrong, because we'd propagate the changes repeatedly.\n>\nI changed the changes_since_analyze_reported counter not to reset.\n\n>\n> 3) pgstat_recv_analyze\n>\n> Shouldn't it propagate the counters before resetting them? I understand\n> that for the just-analyzed relation we can't do better, but why not to\n> propagate the counters to parents? 
(Not necessarily from this place in\n> the stat collector, maybe the analyze process should do that.)\n>\nI realized that we should propagate the counters for manual ANALYZE too,\nthanks to the examples you offered in another email.\nI fixed that for manual ANALYZE as well.\n\n>\n> 4) pgstat_recv_reportedchanges\n>\n> I think this needs to be more careful when updating the value - the\n> stats collector might have received other messages modifying those\n> counters (including e.g. PGSTAT_MTYPE_ANALYZE with a reset), so maybe we\n> can get into situation with\n>\n> (changes_since_analyze_reported > changes_since_analyze)\n>\n> if we just blindly increment the value. I'd bet it would lead to funny\n> stuff. So maybe this should be careful to never exceed this?\n>\npgstat_propagate_changes() calls pgstat_report_reportedchanges()\nonly if (changes_since_analyze_reported < changes_since_analyze).\nSo I think we won't get into such a situation\n> (changes_since_analyze_reported > changes_since_analyze)\nbut am I missing something?\n\n> I also realized relation_needs_vacanalyze is not really doing what I\n> suggested - it propagates the counts, but does so in the existing loop\n> which checks which relations need vacuum/analyze.\n>\n> That means we may skip the parent table in the *current* round, because\n> it'll see the old (not yet updated) counts. It's likely to be processed\n> in the next autovacuum round, but that may actually not happen. The\n> trouble is the reltuples for the parent is calculated using *current*\n> child reltuples values, but we're comparing it to the *old* value of\n> changes_since_analyze. So e.g. if enough rows were inserted into the\n> partitions, it may still be below the analyze threshold.\n>\nIndeed, the partitioned table was not analyzed at the same time as\nits leaf partitions due to the delay of propagating counters. 
According\nto your proposal, I added a separate loop to propagate the counters\nbefore collecting a list of relations to vacuum/analyze.\n\n--\nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center",
"msg_date": "Thu, 8 Apr 2021 00:39:16 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "OK, I bit the bullet and re-did the logic in the way I had proposed\nearlier in the thread: do the propagation on the collector's side, by\nsending only the list of ancestors: the collector can read the tuple\nchange count by itself, to add it to each ancestor. This seems less\nwasteful. Attached is v16 which does it that way and seems to work\nnicely under my testing.\n\nHowever, I just noticed there is a huge problem, which is that the new\ncode in relation_needs_vacanalyze() is doing find_all_inheritors(), and\nwe don't necessarily have a snapshot that lets us do that. While adding\na snapshot acquisition at that spot is a very easy fix, I hesitate to\nfix it that way, because the whole idea there seems quite wasteful: we\nhave to look up, open and lock every single partition, on every single\nautovacuum iteration through the database. That seems bad. I'm\ninclined to think that a better idea may be to store reltuples for the\npartitioned table in pg_class.reltuples, instead of having to add up the\nreltuples of each partition. I haven't checked if this is likely to\nbreak anything.\n\n(Also, a minor buglet: if we do ANALYZE (col1), then ANALYZE (col2) a\npartition, then we repeatedly propagate the counts to the parent table,\nso we would cause the parent to be analyzed more times than it should.\nSounds like we should not send the ancestor list when a column list is\ngiven to manual analyze. I haven't verified this, however.)\n\n-- \n�lvaro Herrera Valdivia, Chile\nSyntax error: function hell() needs an argument.\nPlease choose what hell you want to involve.",
"msg_date": "Wed, 7 Apr 2021 23:22:35 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On 2021-Apr-07, Alvaro Herrera wrote:\n\n> OK, I bit the bullet and re-did the logic in the way I had proposed\n> earlier in the thread: do the propagation on the collector's side, by\n> sending only the list of ancestors: the collector can read the tuple\n> change count by itself, to add it to each ancestor. This seems less\n> wasteful. Attached is v16 which does it that way and seems to work\n> nicely under my testing.\n\nPushed with this approach. Thanks for persisting with this.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Thu, 8 Apr 2021 01:20:14 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On 2021-Apr-07, Alvaro Herrera wrote:\n\n> However, I just noticed there is a huge problem, which is that the new\n> code in relation_needs_vacanalyze() is doing find_all_inheritors(), and\n> we don't necessarily have a snapshot that lets us do that. While adding\n> a snapshot acquisition at that spot is a very easy fix, I hesitate to\n> fix it that way, because the whole idea there seems quite wasteful: we\n> have to look up, open and lock every single partition, on every single\n> autovacuum iteration through the database. That seems bad. I'm\n> inclined to think that a better idea may be to store reltuples for the\n> partitioned table in pg_class.reltuples, instead of having to add up the\n> reltuples of each partition. I haven't checked if this is likely to\n> break anything.\n\nI forgot to comment on this aspect. First, I was obviously mistaken\nabout there not being an active snapshot. I mean, it's correct that\nthere isn't. The issue is that it's really a bug to require that there\nis one; it just hasn't failed before because partially detached\npartitions aren't very common. So I patched that as a bug in a\npreliminary patch.\n\nNext, the idea of storing the number of tuples in pg_class.reltuples is\na nice one, and I think we should consider it in the long run. However,\nwhile it can be done as a quick job (shown in the attached, which AFAICT\nworks fine) there are side-effects -- for example, TRUNCATE doesn't\nclear the value, which is surely wrong. I suspect that if I try to\nhandle it in this way, it would blow up in some corner case I forgot to\nconsider. So, I decided not to go that way, at least for now.\n\n-- \n�lvaro Herrera Valdivia, Chile",
"msg_date": "Thu, 8 Apr 2021 02:04:08 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "\n\nOn 4/8/21 5:22 AM, Alvaro Herrera wrote:\n> OK, I bit the bullet and re-did the logic in the way I had proposed\n> earlier in the thread: do the propagation on the collector's side, by\n> sending only the list of ancestors: the collector can read the tuple\n> change count by itself, to add it to each ancestor. This seems less\n> wasteful. Attached is v16 which does it that way and seems to work\n> nicely under my testing.\n> \n> However, I just noticed there is a huge problem, which is that the new\n> code in relation_needs_vacanalyze() is doing find_all_inheritors(), and\n> we don't necessarily have a snapshot that lets us do that. While adding\n> a snapshot acquisition at that spot is a very easy fix, I hesitate to\n> fix it that way, because the whole idea there seems quite wasteful: we\n> have to look up, open and lock every single partition, on every single\n> autovacuum iteration through the database. That seems bad. I'm\n> inclined to think that a better idea may be to store reltuples for the\n> partitioned table in pg_class.reltuples, instead of having to add up the\n> reltuples of each partition. I haven't checked if this is likely to\n> break anything.\n> \n\nHow would that value get updated, for the parent?\n\n> (Also, a minor buglet: if we do ANALYZE (col1), then ANALYZE (col2) a\n> partition, then we repeatedly propagate the counts to the parent table,\n> so we would cause the parent to be analyzed more times than it should.\n> Sounds like we should not send the ancestor list when a column list is\n> given to manual analyze. I haven't verified this, however.)\n> \n\nAre you sure? I haven't tried, but shouldn't this be prevented by only\nsending the delta between the current and last reported value?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 8 Apr 2021 12:13:51 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On 2021-Apr-08, Tomas Vondra wrote:\n\n> On 4/8/21 5:22 AM, Alvaro Herrera wrote:\n\n> > However, I just noticed there is a huge problem, which is that the new\n> > code in relation_needs_vacanalyze() is doing find_all_inheritors(), and\n> > we don't necessarily have a snapshot that lets us do that. While adding\n> > a snapshot acquisition at that spot is a very easy fix, I hesitate to\n> > fix it that way, because the whole idea there seems quite wasteful: we\n> > have to look up, open and lock every single partition, on every single\n> > autovacuum iteration through the database. That seems bad. I'm\n> > inclined to think that a better idea may be to store reltuples for the\n> > partitioned table in pg_class.reltuples, instead of having to add up the\n> > reltuples of each partition. I haven't checked if this is likely to\n> > break anything.\n> \n> How would that value get updated, for the parent?\n\nSame as for any other relation: ANALYZE would set it, after it's done\nscanning the table. We would to make sure that nothing resets it to\nempty, though, and that it doesn't cause issues elsewhere. (The patch I\nsent contains the minimal change to make it work, but of course that's\nmissing having other pieces of code maintain it.)\n\n> > (Also, a minor buglet: if we do ANALYZE (col1), then ANALYZE (col2) a\n> > partition, then we repeatedly propagate the counts to the parent table,\n> > so we would cause the parent to be analyzed more times than it should.\n> > Sounds like we should not send the ancestor list when a column list is\n> > given to manual analyze. I haven't verified this, however.)\n> \n> Are you sure? I haven't tried, but shouldn't this be prevented by only\n> sending the delta between the current and last reported value?\n\nI did try, and yes it behaves as you say.\n\n-- \n�lvaro Herrera Valdivia, Chile\nBob [Floyd] used to say that he was planning to get a Ph.D. 
by the \"green\nstamp method,\" namely by saving envelopes addressed to him as 'Dr. Floyd'.\nAfter collecting 500 such letters, he mused, a university somewhere in\nArizona would probably grant him a degree. (Don Knuth)\n\n\n",
"msg_date": "Thu, 8 Apr 2021 11:27:57 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On 4/8/21 5:27 PM, Alvaro Herrera wrote:\n> On 2021-Apr-08, Tomas Vondra wrote:\n> \n>> On 4/8/21 5:22 AM, Alvaro Herrera wrote:\n> \n>>> However, I just noticed there is a huge problem, which is that the new\n>>> code in relation_needs_vacanalyze() is doing find_all_inheritors(), and\n>>> we don't necessarily have a snapshot that lets us do that. While adding\n>>> a snapshot acquisition at that spot is a very easy fix, I hesitate to\n>>> fix it that way, because the whole idea there seems quite wasteful: we\n>>> have to look up, open and lock every single partition, on every single\n>>> autovacuum iteration through the database. That seems bad. I'm\n>>> inclined to think that a better idea may be to store reltuples for the\n>>> partitioned table in pg_class.reltuples, instead of having to add up the\n>>> reltuples of each partition. I haven't checked if this is likely to\n>>> break anything.\n>>\n>> How would that value get updated, for the parent?\n> \n> Same as for any other relation: ANALYZE would set it, after it's done\n> scanning the table. We would to make sure that nothing resets it to\n> empty, though, and that it doesn't cause issues elsewhere. (The patch I\n> sent contains the minimal change to make it work, but of course that's\n> missing having other pieces of code maintain it.)\n> \n\nSo ANALYZE would inspect the child relations, sum the reltuples and set\nit for the parent? IMO that'd be problematic because it'd mean we're\ncomparing the current number of changes with reltuples value which may\nbe arbitrarily stale (if we haven't analyzed the parent for a while).\n\nThat's essentially the issue I described when explaining why I think the\ncode needs to propagate the changes, reread the stats and then evaluate\nwhich relations need vacuuming. It's similar to the issue of comparing\nold changes_since_analyze vs. current reltuples, which is why the code\nis rereading the stats before checking the thresholds. 
This time it's\nthe opposite direction - the reltuples might be stale.\n\nFWIW I think the current refresh logic is not quite correct, because\nautovac_refresh_stats does some throttling (STATS_READ_DELAY). It\nprobably needs a \"force\" parameter to ensure it actually reads the\ncurrent stats in this one case.\n\n>>> (Also, a minor buglet: if we do ANALYZE (col1), then ANALYZE (col2) a\n>>> partition, then we repeatedly propagate the counts to the parent table,\n>>> so we would cause the parent to be analyzed more times than it should.\n>>> Sounds like we should not send the ancestor list when a column list is\n>>> given to manual analyze. I haven't verified this, however.)\n>>\n>> Are you sure? I haven't tried, but shouldn't this be prevented by only\n>> sending the delta between the current and last reported value?\n> \n> I did try, and yes it behaves as you say.\n> \n\nOK, good.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 8 Apr 2021 17:55:00 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On 2021-Apr-08, Tomas Vondra wrote:\n\n> On 4/8/21 5:27 PM, Alvaro Herrera wrote:\n>\n> > Same as for any other relation: ANALYZE would set it, after it's done\n> > scanning the table. We would to make sure that nothing resets it to\n> > empty, though, and that it doesn't cause issues elsewhere. (The patch I\n> > sent contains the minimal change to make it work, but of course that's\n> > missing having other pieces of code maintain it.)\n> \n> So ANALYZE would inspect the child relations, sum the reltuples and set\n> it for the parent? IMO that'd be problematic because it'd mean we're\n> comparing the current number of changes with reltuples value which may\n> be arbitrarily stale (if we haven't analyzed the parent for a while).\n\nWhat? Not at all. reltuples would be set by ANALYZE on one run, and\nthen the value is available for the future autovacuum run. That's how\nit works for regular tables too, so I'm not sure what you problem have\nwith that. The (possibly stale) reltuples value is multiplied by the\nscale factor, and added to the analyze_threshold value, and that's\ncompared with the current changes_since_analyze to determine whether to\nanalyze or not.\n\n> That's essentially the issue I described when explaining why I think the\n> code needs to propagate the changes, reread the stats and then evaluate\n> which relations need vacuuming. It's similar to the issue of comparing\n> old changes_since_analyze vs. current reltuples, which is why the code\n> is rereading the stats before checking the thresholds. This time it's\n> the opposite direction - the reltuples might be stale.\n\nWell, I don't think the issue is the same. 
reltuples is always stale,\neven for regular tables, because that's just how it works.\nchanges_since_analyze is not stale for regular tables, and that's why it\nmakes sense to propagate it from partitions to ancestors prior to\nchecking the analyze condition.\n\n> FWIW I think the current refresh logic is not quite correct, because\n> autovac_refresh_stats does some throttling (STATS_READ_DELAY). It\n> probably needs a \"force\" parameter to ensure it actually reads the\n> current stats in this one case.\n\nHmm ... good catch, but actually that throttling only applies to the\nlauncher. do_autovacuum runs in the worker, so there's no throttling.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n",
"msg_date": "Thu, 8 Apr 2021 12:19:16 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On Thu, Apr 08, 2021 at 01:20:14AM -0400, Alvaro Herrera wrote:\n> On 2021-Apr-07, Alvaro Herrera wrote:\n> \n> > OK, I bit the bullet and re-did the logic in the way I had proposed\n> > earlier in the thread: do the propagation on the collector's side, by\n> > sending only the list of ancestors: the collector can read the tuple\n> > change count by itself, to add it to each ancestor. This seems less\n> > wasteful. Attached is v16 which does it that way and seems to work\n> > nicely under my testing.\n> \n> Pushed with this approach. Thanks for persisting with this.\n\ncommit 0827e8af70f4653ba17ed773f123a60eadd9f9c9\n| This also introduces necessary reloptions support for partitioned tables\n| (autovacuum_enabled, autovacuum_analyze_scale_factor,\n| autovacuum_analyze_threshold). It's unclear how best to document this\n| aspect.\n\nAt least this part needs to be updated - see also ed62d3737.\n\ndoc/src/sgml/ref/create_table.sgml- The storage parameters currently\ndoc/src/sgml/ref/create_table.sgml- available for tables are listed below.\n...\ndoc/src/sgml/ref/create_table.sgml: Specifying these parameters for partitioned tables is not supported,\ndoc/src/sgml/ref/create_table.sgml- but you may specify them for individual leaf partitions.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 8 Apr 2021 16:30:51 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On 2021-Apr-08, Justin Pryzby wrote:\n\n> commit 0827e8af70f4653ba17ed773f123a60eadd9f9c9\n> | This also introduces necessary reloptions support for partitioned tables\n> | (autovacuum_enabled, autovacuum_analyze_scale_factor,\n> | autovacuum_analyze_threshold). It's unclear how best to document this\n> | aspect.\n> \n> At least this part needs to be updated - see also ed62d3737.\n> \n> doc/src/sgml/ref/create_table.sgml- The storage parameters currently\n> doc/src/sgml/ref/create_table.sgml- available for tables are listed below.\n> ...\n> doc/src/sgml/ref/create_table.sgml: Specifying these parameters for partitioned tables is not supported,\n> doc/src/sgml/ref/create_table.sgml- but you may specify them for individual leaf partitions.\n\nAh, thanks for pointing it out. How about the attached?\n\nThis new bit reads weird:\n\n+ Most parameters are not supported on partitioned tables, with exceptions\n+ noted below; you may specify them for individual leaf partitions.\n\n\nMaybe \"Most parameters are not supported on partitioned tables, with\nexceptions noted below; you may specify others for individual leaf\npartitions.\"\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W",
"msg_date": "Thu, 8 Apr 2021 17:56:25 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On Thu, Apr 08, 2021 at 05:56:25PM -0400, Alvaro Herrera wrote:\n> On 2021-Apr-08, Justin Pryzby wrote:\n> \n> > commit 0827e8af70f4653ba17ed773f123a60eadd9f9c9\n> > | This also introduces necessary reloptions support for partitioned tables\n> > | (autovacuum_enabled, autovacuum_analyze_scale_factor,\n> > | autovacuum_analyze_threshold). It's unclear how best to document this\n> > | aspect.\n> > \n> > At least this part needs to be updated - see also ed62d3737.\n> > \n> > doc/src/sgml/ref/create_table.sgml- The storage parameters currently\n> > doc/src/sgml/ref/create_table.sgml- available for tables are listed below.\n> > ...\n> > doc/src/sgml/ref/create_table.sgml: Specifying these parameters for partitioned tables is not supported,\n> > doc/src/sgml/ref/create_table.sgml- but you may specify them for individual leaf partitions.\n> \n> Ah, thanks for pointing it out. How about the attached?\n> \n> This new bit reads weird:\n> \n> + Most parameters are not supported on partitioned tables, with exceptions\n> + noted below; you may specify them for individual leaf partitions.\n\n\"Except where noted, these parameters are not supported on partitioned tables.\"\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 8 Apr 2021 17:00:42 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Thu, Apr 08, 2021 at 05:56:25PM -0400, Alvaro Herrera wrote:\n>> This new bit reads weird:\n>> \n>> + Most parameters are not supported on partitioned tables, with exceptions\n>> + noted below; you may specify them for individual leaf partitions.\n\n> \"Except where noted, these parameters are not supported on partitioned\n> tables.\"\n\nI think what it's trying to get at is\n\n\"Except where noted, these parameters are not supported on partitioned\ntables. However, you can specify them on individual leaf partitions.\"\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Apr 2021 18:05:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-08 01:20:14 -0400, Alvaro Herrera wrote:\n> On 2021-Apr-07, Alvaro Herrera wrote:\n>\n> > OK, I bit the bullet and re-did the logic in the way I had proposed\n> > earlier in the thread: do the propagation on the collector's side, by\n> > sending only the list of ancestors: the collector can read the tuple\n> > change count by itself, to add it to each ancestor. This seems less\n> > wasteful. Attached is v16 which does it that way and seems to work\n> > nicely under my testing.\n>\n> Pushed with this approach. Thanks for persisting with this.\n\nI'm looking at this in the context of rebasing & polishing the shared\nmemory stats patch.\n\nI have a few questions / concerns:\n\n1) Somehow it seems like a violation to do stuff like\n get_partition_ancestors() in pgstat.c. It's nothing I can't live with, but\n it feels a bit off. Would likely not be too hard to address, e.g. by just\n putting some of pgstat_report_anl_ancestors in partition.c instead.\n\n2) Why does it make sense that autovacuum sends a stats message for every\n partition in the system that had any chances since the last autovacuum\n cycle? On a database with a good number of objects / a short naptime we'll\n often end up sending messages for the same set of tables from separate\n workers, because they don't yet see the concurrent\n tabentry->changes_since_analyze_reported.\n\n3) What is the goal of the autovac_refresh_stats() after the loop doing\n pgstat_report_anl_ancestors()? I think it'll be common that the stats\n collector hasn't even processed the incoming messages by that point, not to\n speak of actually having written out a new stats file. 
If it took less than\n 10ms (PGSTAT_RETRY_DELAY) to get to autovac_refresh_stats(),\n backend_read_statsfile() will not wait for a new stats file to be written\n out, and we'll just re-read the state we previously did.\n\n It's pretty expensive to re-read the stats file in some workloads, so I'm a\n bit concerned that we end up significantly increasing the amount of stats\n updates/reads, without actually gaining anything reliable?\n\n4) In the shared mem stats patch I went to a fair bit of trouble to try to get\n rid of pgstat_vacuum_stat() (which scales extremely poorly to larger\n systems). For that to work pending stats can only be \"staged\" while holding\n a lock on a relation that prevents the relation from being concurrently\n dropped (pending stats increment a refcount for the shared stats object,\n which ensures that we don't lose track of the fact that a stats object has\n been dropped, even when stats only get submitted later).\n\n I'm not yet clear on how to make this work for\n pgstat_report_anl_ancestors() - but I probably can find a way. But it does\n feel a bit off to issue stats stuff for tables we're not sure still exist.\n\n\nI'll go and read through the thread, but my first thought is that having a\nhashtable in do_autovacuum() that contains stats for partitioned tables would\nbe a good bit more efficient than the current approach? We already have a\nhashtable for each toast table, compared to that having a hashtable for each\npartitioned table doesn't seem like it'd be a problem?\n\nWith a small bit of extra work that could even avoid the need for the\nadditional pass through pg_class. 
Do the partitioned table data-gathering as\npart of the \"collect main tables to vacuum\" pass, and then do one of\n\na) do the perform-analyze decision purely off the contents of that\n partitioned-table hash\nb) fetch the RELOID syscache entry by oid and then decide based on that\nc) handle partitioned tables as part of the \"check TOAST tables\" pass - it's\n not like we gain a meaningful amount of efficiency by using a ScanKey to\n filter for RELKIND_TOASTVALUE, given that there's no index, and that an\n index wouldn't commonly be useful given the percentage of toast tables in\n pg_class\n\nPartitioning makes it a bigger issue that do_autovacuum() does multiple passes\nthrough pg_class (as it makes scenarios in which pg_class is large more\ncommon), so I don't think it's great that partitioning also increases the\nnumber of passes through pg_class.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 22 Jul 2021 13:54:58 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Hi,\n\nOn 2021-07-22 13:54:58 -0700, Andres Freund wrote:\n> On 2021-04-08 01:20:14 -0400, Alvaro Herrera wrote:\n> > On 2021-Apr-07, Alvaro Herrera wrote:\n> >\n> > > OK, I bit the bullet and re-did the logic in the way I had proposed\n> > > earlier in the thread: do the propagation on the collector's side, by\n> > > sending only the list of ancestors: the collector can read the tuple\n> > > change count by itself, to add it to each ancestor. This seems less\n> > > wasteful. Attached is v16 which does it that way and seems to work\n> > > nicely under my testing.\n> >\n> > Pushed with this approach. Thanks for persisting with this.\n> \n> I'm looking at this in the context of rebasing & polishing the shared\n> memory stats patch.\n> \n> I have a few questions / concerns:\n\nAnother one, and I think this might warrant thinking about for v14:\n\nIsn't this going to create a *lot* of redundant sampling? Especially if you\nhave any sort of nested partition tree. In the most absurd case a partition\nwith n parents will get sampled n times, solely due to changes to itself.\n\nLook at the following example:\n\nBEGIN;\nDROP TABLE if exists p;\nCREATE TABLE p (i int) partition by range(i);\nCREATE TABLE p_0 PARTITION OF p FOR VALUES FROM ( 0) to (5000) partition by range(i);\nCREATE TABLE p_0_0 PARTITION OF p_0 FOR VALUES FROM ( 0) to (1000);\nCREATE TABLE p_0_1 PARTITION OF p_0 FOR VALUES FROM (1000) to (2000);\nCREATE TABLE p_0_2 PARTITION OF p_0 FOR VALUES FROM (2000) to (3000);\nCREATE TABLE p_0_3 PARTITION OF p_0 FOR VALUES FROM (3000) to (4000);\nCREATE TABLE p_0_4 PARTITION OF p_0 FOR VALUES FROM (4000) to (5000);\n-- create some initial data\nINSERT INTO p select generate_series(0, 5000 - 1) data FROM generate_series(1, 100) reps;\nCOMMIT;\n\nUPDATE p_0_4 SET i = i;\n\n\nWhenever the update is executed, all partitions will be sampled at least twice\n(once for p and once for p_0), with p_0_4 sampled three times.\n\nOf course, this is an extreme 
example, but it's not hard to imagine cases\nwhere v14 will cause the number of auto-analyzes to increase sufficiently to bog\ndown autovacuum to a problematic degree.\n\n\nAdditionally, while a partitioned table is being analyzed, all of its child\npartitions are AccessShareLock'ed at once. If a partition hierarchy has more than one\nlevel, it actually is likely that multiple autovacuum workers will end up\nprocessing the ancestors separately. This seems like it might contribute to\nlock exhaustion issues with larger partition hierarchies?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 27 Jul 2021 19:23:42 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Hi,\n\nCCing RMT because I think we need to do something about this for v14.\n\nOn 2021-07-27 19:23:42 -0700, Andres Freund wrote:\n> On 2021-07-22 13:54:58 -0700, Andres Freund wrote:\n> > On 2021-04-08 01:20:14 -0400, Alvaro Herrera wrote:\n> > > On 2021-Apr-07, Alvaro Herrera wrote:\n> > >\n> > > > OK, I bit the bullet and re-did the logic in the way I had proposed\n> > > > earlier in the thread: do the propagation on the collector's side, by\n> > > > sending only the list of ancestors: the collector can read the tuple\n> > > > change count by itself, to add it to each ancestor. This seems less\n> > > > wasteful. Attached is v16 which does it that way and seems to work\n> > > > nicely under my testing.\n> > >\n> > > Pushed with this approach. Thanks for persisting with this.\n> >\n> > I'm looking at this in the context of rebasing & polishing the shared\n> > memory stats patch.\n> >\n> > I have a few questions / concerns:\n>\n> Another one, and I think this might warrant thinking about for v14:\n>\n> Isn't this going to create a *lot* of redundant sampling? Especially if you\n> have any sort of nested partition tree. 
In the most absurd case a partition\n> with n parents will get sampled n times, solely due to changes to itself.\n>\n> Look at the following example:\n>\n> BEGIN;\n> DROP TABLE if exists p;\n> CREATE TABLE p (i int) partition by range(i);\n> CREATE TABLE p_0 PARTITION OF p FOR VALUES FROM ( 0) to (5000) partition by range(i);\n> CREATE TABLE p_0_0 PARTITION OF p_0 FOR VALUES FROM ( 0) to (1000);\n> CREATE TABLE p_0_1 PARTITION OF p_0 FOR VALUES FROM (1000) to (2000);\n> CREATE TABLE p_0_2 PARTITION OF p_0 FOR VALUES FROM (2000) to (3000);\n> CREATE TABLE p_0_3 PARTITION OF p_0 FOR VALUES FROM (3000) to (4000);\n> CREATE TABLE p_0_4 PARTITION OF p_0 FOR VALUES FROM (4000) to (5000);\n> -- create some initial data\n> INSERT INTO p select generate_series(0, 5000 - 1) data FROM generate_series(1, 100) reps;\n> COMMIT;\n>\n> UPDATE p_0_4 SET i = i;\n>\n>\n> Whenever the update is executed, all partitions will be sampled at least twice\n> (once for p and once for p_0), with p_0_4 sampled three times.\n>\n> Of course, this is an extreme example, but it's not hard to imagine cases\n> where v14 will cause the number of auto-analyzes increase sufficiently to bog\n> down autovacuum to a problematic degree.\n>\n>\n> Additionally, while analyzing all child partitions for a partitioned tables\n> are AccessShareLock'ed at once. If a partition hierarchy has more than one\n> level, it actually is likely that multiple autovacuum workers will end up\n> processing the ancestors separately. This seems like it might contribute to\n> lock exhaustion issues with larger partition hierarchies?\n\n\nI started to write a patch rejiggering autovacuum.c portion of this\nchange. While testing it I hit the case of manual ANALYZEs leaving\nchanges_since_analyze for partitioned tables in a bogus state - without a\nminimally invasive way to fix that. 
After a bit of confused staring I realized\nthat the current code has a very similar problem:\n\nUsing the same setup as above:\n\nINSERT INTO p VALUES (0,0); /* repeat as many times as desired */\nANALYZE p_0_0;\n\nAt this point the system will have lost track of the changes to p_0_0, unless\nan autovacuum worker was launched between the INSERTs and the ANALYZE (which\nwould cause pgstat_report_anl_ancestors() to report the change count upwards).\n\nThere appears to be code trying to address that, but I don't see how it\never does anything meaningful?\n\n\t/*\n\t * Now report ANALYZE to the stats collector. For regular tables, we do\n\t * it only if not doing inherited stats. For partitioned tables, we only\n\t * do it for inherited stats. (We're never called for not-inherited stats\n\t * on partitioned tables anyway.)\n\t *\n\t * Reset the changes_since_analyze counter only if we analyzed all\n\t * columns; otherwise, there is still work for auto-analyze to do.\n\t */\n\tif (!inh || onerel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n\t\tpgstat_report_analyze(onerel, totalrows, totaldeadrows,\n\t\t\t\t\t\t\t (va_cols == NIL));\n\n\t/*\n\t * If this is a manual analyze of all columns of a permanent leaf\n\t * partition, and not doing inherited stats, also let the collector know\n\t * about the ancestor tables of this partition. Autovacuum does the\n\t * equivalent of this at the start of its run, so there's no reason to do\n\t * it there.\n\t */\n\tif (!inh && !IsAutoVacuumWorkerProcess() &&\n\t\t(va_cols == NIL) &&\n\t\tonerel->rd_rel->relispartition &&\n\t\tonerel->rd_rel->relkind == RELKIND_RELATION &&\n\t\tonerel->rd_rel->relpersistence == RELPERSISTENCE_PERMANENT)\n\t{\n\t\tpgstat_report_anl_ancestors(RelationGetRelid(onerel));\n\t}\n\nThe pgstat_report_analyze() triggers pgstat_recv_analyze() to reset the\ncounter that pgstat_recv_anl_ancestors() would use to report changes\nupwards:\n\n\t/*\n\t * If commanded, reset changes_since_analyze to zero. 
This forgets any\n\t * changes that were committed while the ANALYZE was in progress, but we\n\t * have no good way to estimate how many of those there were.\n\t */\n\tif (msg->m_resetcounter)\n\t{\n\t\ttabentry->changes_since_analyze = 0;\n\t\ttabentry->changes_since_analyze_reported = 0;\n\t}\n\nAnd if one instead inverts the order of pgstat_report_analyze() and\npgstat_report_anl_ancestors() one gets a slightly different problem: A manual\nANALYZE of the partition root results in the partition root having a non-zero\nchanges_since_analyze afterwards. expand_vacuum() causes child partitions to be\nadded to the list of relations, which *first* causes the partition root to be\nanalyzed, and *then* partitions. The partitions then report their\nchanges_since_analyze upwards.\n\n\nI don't think the code as is is fit for v14. It looks like it was rewritten\nwith a new approach just before the freeze ([1]), and as far as I can tell the\nconcerns I quoted above weren't even discussed in the whole thread. Alvaro,\nany comments?\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/20210408032235.GA6842%40alvherre.pgsql\n\n\n",
"msg_date": "Thu, 29 Jul 2021 18:03:55 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "\nOn 7/29/21 9:03 PM, Andres Freund wrote:\n> Hi,\n>\n> CCing RMT because I think we need to do something about this for v14.\n\n\n\nThanks. We are now aware of it.\n\n\n[...]\n\n> I don't think the code as is is fit for v14. It looks like it was rewritten\n> with a new approach just before the freeze ([1]), and as far as I can tell the\n> concerns I quoted above weren't even discussed in the whole thread. Alvaro,\n> any comments?\n>\n\nI discussed this briefly with Alvaro late last night. He's now aware of\nthe issue, but I believe he's away for some days, and probably won't be\nable to respond until his return.\n\n\nSorry I don't have more news, but I didn't want anyone thinking this was\nbeing ignored.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 3 Aug 2021 15:55:05 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "At Thu, 29 Jul 2021 18:03:55 -0700, Andres Freund <andres@anarazel.de> wrote in \n> And if one instead inverts the order of pgstat_report_analyze() and\n> pgstat_report_anl_ancestors() one gets a slightly different problem: A manual\n> ANALYZE of the partition root results in the partition root having a non-zero\n> changes_since_analyze afterwards. expand_vacuum() causes child partitions to be\n> added to the list of relations, which *first* causes the partition root to be\n> analyzed, and *then* partitions. The partitions then report their\n> changes_since_analyze upwards.\n\nFor the last behavior, as Andres suggested, the scan order need to be\nreversed (or to be in the same order with autovacuum). Since\nfind_all_inheritors scans breadth-first so just reversing the result\nworks. The breadth-first is currently not in the contract of the\ninterface of the function. I suppose we can add such a contract?\n\nFinally, I ended up with the attached.\n\n - reverse the relation order within a tree\n - reverse the order of pgstat_report_analyze and pgstat_report_analyze.\n\nInheritance expansion is performed per-tree basis so it works fine\neven if multiple relations are given to vacuum().\n\n\n> I don't think the code as is is fit for v14. It looks like it was rewritten\n> with a new approach just before the freeze ([1]), and as far as I can tell the\n> concerns I quoted above weren't even discussed in the whole thread. 
Alvaro,\n> any comments?\n> \n> Greetings,\n> \n> Andres Freund\n> \n> [1] https://www.postgresql.org/message-id/20210408032235.GA6842%40alvherre.pgsql\n\nFYI: this behaves as follows.\n\nCREATE TABLE p (a int) PARTITION BY RANGE (a);\nCREATE TABLE c1 PARTITION OF p FOR VALUES FROM (0) TO (200) PARTITION BY RANGE(a);\nCREATE TABLE c11 PARTITION OF c1 FOR VALUES FROM (0) TO (100);\nCREATE TABLE c12 PARTITION OF c1 FOR VALUES FROM (100) TO (200);\nCREATE TABLE c2 PARTITION OF p FOR VALUES FROM (200) TO (400) PARTITION BY RANGE(a);\nCREATE TABLE c21 PARTITION OF c2 FOR VALUES FROM (200) TO (300);\nCREATE TABLE c22 PARTITION OF c2 FOR VALUES FROM (300) TO (400);\nINSERT INTO p (SELECT a FROM generate_series(0, 400 - 1) a, generate_series(0, 10) b);\n\n\nINSERT INTO p (SELECT 200 FROM generate_series(0, 99));\n\nSELECT relid, relname, n_mod_since_analyze FROM pg_stat_user_tables ORDER BY relid;\n relid | relname | n_mod_since_analyze \n-------+---------+---------------------\n 16426 | p | 0\n 16429 | c1 | 0\n 16432 | c11 | 0\n 16435 | c12 | 0\n 16438 | c2 | 0\n 16441 | c21 | 100\n 16444 | c22 | 0\n 16447 | sa | 0\n(8 rows)\n\nAfter \"ANALYZE c21;\"\n relid | relname | n_mod_since_analyze \n-------+---------+---------------------\n 16426 | p | 100\n 16429 | c1 | 0\n 16432 | c11 | 0\n 16435 | c12 | 0\n 16438 | c2 | 100\n 16441 | c21 | 0\n 16444 | c22 | 0\n 16447 | sa | 0\n\nAfter \"ANALYZE c2;\"\n relid | relname | n_mod_since_analyze \n-------+---------+---------------------\n 16426 | p | 100\n 16429 | c1 | 0\n 16432 | c11 | 0\n 16435 | c12 | 0\n 16438 | c2 | 0\n 16441 | c21 | 0\n 16444 | c22 | 0\n 16447 | sa | 0\n\nAfter \"ANALYZE p;\"\n(all zero)\n\n\nHowever, this gives a strange-looking side-effect, which affected\nregression results.\n\n=# VACUUM ANALYZE p(a, a);\nERROR: column \"a\" of relation \"c22\" appears more than once\n\n(Previously it complained about p.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 04 Aug 2021 14:22:54 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Hi\n\nOn 2021-Jul-27, Andres Freund wrote:\n\n> Isn't this going to create a *lot* of redundant sampling? Especially if you\n> have any sort of nested partition tree. In the most absurd case a partition\n> with n parents will get sampled n times, solely due to changes to itself.\n\nIt seems to me that you're barking up the wrong tree on this point.\nThis problem you describe is not something that was caused by this\npatch; ANALYZE has always worked like this. We have discussed the idea\nof avoiding redundant sampling, but it's clear that it is not a simple\nproblem, and solving it was not in scope for this patch.\n\n> Additionally, while analyzing all child partitions for a partitioned tables\n> are AccessShareLock'ed at once. If a partition hierarchy has more than one\n> level, it actually is likely that multiple autovacuum workers will end up\n> processing the ancestors separately. This seems like it might contribute to\n> lock exhaustion issues with larger partition hierarchies?\n\nI agree this seems a legitimate problem.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 9 Aug 2021 16:02:33 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Hi,\n\nOn 2021-08-09 16:02:33 -0400, Alvaro Herrera wrote:\n> On 2021-Jul-27, Andres Freund wrote:\n> \n> > Isn't this going to create a *lot* of redundant sampling? Especially if you\n> > have any sort of nested partition tree. In the most absurd case a partition\n> > with n parents will get sampled n times, solely due to changes to itself.\n> \n> It seems to me that you're barking up the wrong tree on this point.\n> This problem you describe is not something that was caused by this\n> patch; ANALYZE has always worked like this. We have discussed the idea\n> of avoiding redundant sampling, but it's clear that it is not a simple\n> problem, and solving it was not in scope for this patch.\n\nI don't agree. There's a difference between this happening after a manual\nANALYZE on partition roots, and this continuously happening in production\nworkloads due to auto-analyzes...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 9 Aug 2021 15:25:49 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Hello,\n\nOn 2021-Jul-22, Andres Freund wrote:\n\n> 1) Somehow it seems like a violation to do stuff like\n> get_partition_ancestors() in pgstat.c. It's nothing I can't live with, but\n> it feels a bit off. Would likely not be too hard to address, e.g. by just\n> putting some of pgstat_report_anl_ancestors in partition.c instead.\n\nI understand the complain about this being a modularity violation -- the\npoint being that pgstat.c has no business accessing system catalogs at all.\nBefore this function, all pgstat_report_* functions were just assembling\na message from counters accumulated somewhere and sending the bytes to\nthe collector, and this new function is a deviation from that.\n\nIt seems that we could improve this by having a function (maybe in\npartition.c as you propose), something like\n\nstatic void\nreport_partition_ancestors(Oid relid)\n{\n\tancestors = get_partition_ancestors( ... );\n\tarray = palloc(sizeof(Oid) * list_length(ancestors));\n\tforeach(lc, ancestors)\n\t{\n\t\tarray[i++] = lfirst_oid(lc);\n\t}\n\tpgstat_report_partition_ancestors(oid, array);\n}\n\nand then pgstat.c works with the given array without having to consult\nsystem catalogs.\n\n> 2) Why does it make sense that autovacuum sends a stats message for every\n> partition in the system that had any [changes] since the last autovacuum\n> cycle? On a database with a good number of objects / a short naptime we'll\n> often end up sending messages for the same set of tables from separate\n> workers, because they don't yet see the concurrent\n> tabentry->changes_since_analyze_reported.\n\nThe traffic could be large, yeah, and I agree it seems undesirable. If\ncollector kept a record of the list of ancestors of each table, then we\nwouldn't need to do this (we would have to know if collector knows a\nparticular partition or not, though ... 
I have no ideas on that.)\n\n> 3) What is the goal of the autovac_refresh_stats() after the loop doing\n> pgstat_report_anl_ancestors()? I think it'll be common that the stats\n> collector hasn't even processed the incoming messages by that point, not to\n> speak of actually having written out a new stats file. If it took less than\n> 10ms (PGSTAT_RETRY_DELAY) to get to autovac_refresh_stats(),\n> backend_read_statsfile() will not wait for a new stats file to be written\n> out, and we'll just re-read the state we previously did.\n> \n> It's pretty expensive to re-read the stats file in some workloads, so I'm a\n> bit concerned that we end up significantly increasing the amount of stats\n> updates/reads, without actually gaining anything reliable?\n\nThis is done once per autovacuum run and the point is precisely to let\nthe next block absorb the updates that were sent. In manual ANALYZE we\ndo it to inform future autovacuum runs.\n\nNote that the PGSTAT_RETRY_DELAY limit is used by the autovac launcher\nonly, and this code is running in the worker; we do flush out the old\ndata. Yes, it's expensive, but we're not doing it once per table, just\nonce per worker run.\n\n> 4) In the shared mem stats patch I went to a fair bit of trouble to try to get\n> rid of pgstat_vacuum_stat() (which scales extremely poorly to larger\n> systems). For that to work pending stats can only be \"staged\" while holding\n> a lock on a relation that prevents the relation from being concurrently\n> dropped (pending stats increment a refcount for the shared stats object,\n> which ensures that we don't loose track of the fact that a stats object has\n> been dropped, even when stats only get submitted later).\n> \n> I'm not yet clear on how to make this work for\n> pgstat_report_anl_ancestors() - but I probably can find a way. But it does\n> feel a bit off to issue stats stuff for tables we're not sure still exist.\n\nI assume you refer to locking the *partition*, right? 
You're not\ntalking about locking the ancestor mentioned in the message. I don't\nknow how the shmem collector works, but it shouldn't be a problem\nthat an ancestor goes away (ALTER TABLE parent DETACH; DROP TABLE\nparent); as long as you've kept a lock on the partition, it should be\nfine. Or am I misinterpreting what you mean?\n\n> I'll go and read through the thread, but my first thought is that having a\n> hashtable in do_autovacuum() that contains stats for partitioned tables would\n> be a good bit more efficient than the current approach? We already have a\n> hashtable for each toast table, compared to that having a hashtable for each\n> partitioned table doesn't seem like it'd be a problem?\n\n> With a small bit of extra work that could even avoid the need for the\n> additional pass through pg_class. Do the partitioned table data-gathering as\n> part of the \"collect main tables to vacuum\" pass, and then do one of\n\nI'll have to re-read the thread to remember why I made it a separate\npass. I think I did it that way because otherwise there was a\nrequirement on the pg_class scan order. (Some earlier version of the\npatch did not have a separate pass and there was some problem or other.\nMaybe you're right that a hash table is sufficient.)\n\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"We're here to devour each other alive\" (Hobbes)\n\n\n",
"msg_date": "Mon, 9 Aug 2021 20:10:29 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On 2021-Aug-09, Andres Freund wrote:\n\n> I don't agree. There's a difference between this happening after a manual\n> ANALYZE on partition roots, and this continuously happening in production\n> workloads due to auto-analyzes...\n\nHmm. That's not completely untrue.\n\nI bring a radical proposal that may be sufficient to close this\nparticular hole. What if we made partitions cause only their\ntop-level parents to become auto-analyzed, and not any intermediate\nancestors? Any intermediate partitioned partitions could be analyzed\nmanually if the user wished, and perhaps some reloption could enable\nautovacuum to do it (with the caveat that it'd cause multiple sampling\nof partitions). I don't yet have a clear picture on how to implement\nthis, but I'll explore it while waiting for opinions on the idea.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Nadie está tan esclavizado como el que se cree libre no siéndolo\" (Goethe)\n\n\n",
"msg_date": "Tue, 10 Aug 2021 09:27:33 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On 2021-Aug-10, Alvaro Herrera wrote:\n\n> I bring a radical proposal that may be sufficient to close this\n> particular hole. What if we made partition only affected their\n> top-level parents to become auto-analyzed, and not any intermediate\n> ancestors? Any intermediate partitioned partitions could be analyzed\n> manually if the user wished, and perhaps some reloption could enable\n> autovacuum to do it (with the caveat that it'd cause multiple sampling\n> of partitions). I don't yet have a clear picture on how to implement\n> this, but I'll explore it while waiting for opinions on the idea.\n\nSo, with this patch (a quick and dirty job) we no longer sample all\npartitions twice; we no longer propagate the tuple counts to p_0.\nWe don't have stats on p_0 anymore, only on p and on the individual\npartitions.\n\nI didn't move the new #include to a more decent place because\n1. that stuff is going to move to partition.c as a new function,\nincluding the new include;\n2. that new function also needs to read the reloptions for p_0 to allow\nthe user to enable stat acquisition for p_0 with \"alter table p_0 set\n(autovacuum_enabled=1)\";\n3. need to avoid reporting ancestors of a partition repeatedly, which\nforestalls the performance objection about reading reloptions too\nfrequently.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/",
"msg_date": "Tue, 10 Aug 2021 17:38:12 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On 2021-Aug-09, Alvaro Herrera wrote:\n\n> > 3) What is the goal of the autovac_refresh_stats() after the loop doing\n> > pgstat_report_anl_ancestors()? I think it'll be common that the stats\n> > collector hasn't even processed the incoming messages by that point, not to\n> > speak of actually having written out a new stats file. If it took less than\n> > 10ms (PGSTAT_RETRY_DELAY) to get to autovac_refresh_stats(),\n> > backend_read_statsfile() will not wait for a new stats file to be written\n> > out, and we'll just re-read the state we previously did.\n> > \n> > It's pretty expensive to re-read the stats file in some workloads, so I'm a\n> > bit concerned that we end up significantly increasing the amount of stats\n> > updates/reads, without actually gaining anything reliable?\n> \n> This is done once per autovacuum run and the point is precisely to let\n> the next block absorb the updates that were sent. In manual ANALYZE we\n> do it to inform future autovacuum runs.\n> \n> Note that the PGSTAT_RETRY_DELAY limit is used by the autovac launcher\n> only, and this code is running in the worker; we do flush out the old\n> data. Yes, it's expensive, but we're not doing it once per table, just\n> once per worker run.\n\nI misunderstood what you were talking about here -- I thought it was\nabout the delay in autovac_refresh_stats (STATS_READ_DELAY, 1s). Now\nthat I look at this again I realize what your point is, and you're\nright, there isn't sufficient time for the collector to absorb the\nmessages we sent before the next pg_class scan starts.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Cada quien es cada cual y baja las escaleras como quiere\" (JMSerrat)\n\n\n",
"msg_date": "Tue, 10 Aug 2021 19:00:09 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "After thinking about the described issues for a while, my proposal is to\ncompletely revamp the way this feature works. See below.\n\nNow, the proposal seems awfully invasive, but it's *the* way I see to\navoid the pgstat traffic. For pg14, maybe we can live with it, and just\nuse the smaller patches that Horiguchi-san and I have posted, which\nsolve the other issues; also, Euler Taveira suggested that we could add\na reloption to turn the feature off completely for some tables (maybe\nmake it off by default and have a reloption to turn it on for specific\npartition hierarchies), so that it doesn't cause undue pain for people\nwith large partitioning hierarchies.\n\n\n* PgStat_StatTabEntry gets a new \"Oid reportAncestorOid\" member. This is\n the OID of a single partitioned ancestor, to which the changed-tuple\n counts are propagated up.\n Normally this is the topmost ancestor; but if the user wishes some\n intermediate ancestor to receive the counts they can use\n ALTER TABLE the_intermediate_ancestor SET (autovacuum_enabled=on).\n\n* Corollary 1: for the normal case of single-level partitioning, the\n parent partitioned table behaves as currently.\n\n* Corollary 2: for multi-level partitioning with no specially\n configured intermediate ancestors, only the leaf partitions and the\n top-level partitioned table will be analyzed. Intermediate ancestors\n are ignored by autovacuum.\n\n* Corollary 3: for multi-level partitioning with some intermediate\n ancestor(s) marked as autovacuum_enabled=on, that ancestor will\n receive all the counts from all of its partitions, so it will get\n analyzed itself; and it'll also forward those counts up to its\n report-ancestor.\n\n\n* On ALTER TABLE .. 
ATTACH PARTITION or CREATE TABLE PARTITION AS,\n we send a message to collector with the analyze-ancestor OID.\n\n* Backends running manual ANALYZE as well as autovacuum will examine\n each table's \"relispartition\" flag and its pgstat table entry; if it\n is a partition and doesn't have reportAncestorOid set, determine which\n ancestor should analyze counts be reported to; include this OID in the\n regular PgStat_MsgAnalyze. This fixes the situation after a crash or\n other stats reset. Also, it's not unduly expensive to do, because\n it's only in the rare case that the value sent by ATTACH was lost.\n\n* Possible race condition in the previous step may cause multiple\n backends to send the same info. Not a serious problem so we don't try\n to handle it.\n\n* When tuple change counts for a partition are received by\n pgstat_recv_tabstat, they are propagated up to the indicated parent\n table in addition to being saved in the table itself.\n (Bonus points: when a table is attached or detached as a partition,\n the live tuples count is propagated to the newly acquired parent.)\n\n\nWhat do people think of this?\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 11 Aug 2021 18:33:07 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Hi,\n\nOn 2021-08-11 18:33:07 -0400, Alvaro Herrera wrote:\n> After thinking about the described issues for a while, my proposal is to\n> completely revamp the way this feature works. See below.\n>\n> Now, the proposal seems awfully invasive, but it's *the* way I see to\n> avoid the pgstat traffic. For pg14, maybe we can live with it, and just\n> use the smaller patches that Horiguchi-san and I have posted, which\n> solve the other issues; also, Euler Taveira suggested that we could add\n> a reloption to turn the feature off completely for some tables (maybe\n> make it off by default and have a reloption to turn it on for specific\n> partition hierarchies), so that it doesn't cause unduly pain for people\n> with large partitioning hierarchies.\n\nI think we should revert the changes for 14 - to me the feature clearly isn't\nmature enough to be released.\n\n\n> * PgStat_StatTabEntry gets a new \"Oid reportAncestorOid\" member. This is\n> the OID of a single partitioned ancestor, to which the changed-tuple\n> counts are propagated up.\n> Normally this is the topmost ancestor; but if the user wishes some\n> intermediate ancestor to receive the counts they can use\n> ALTER TABLE the_intermediate_ancestor SET (autovacuum_enabled=on).\n>\n> * Corollary 1: for the normal case of single-level partitioning, the\n> parent partitioned table behaves as currently.\n>\n> * Corollary 2: for multi-level partitioning with no especially\n> configured intermediate ancestors, only the leaf partitions and the\n> top-level partitioned table will be analyzed. 
Intermediate ancestors\n> are ignored by autovacuum.\n>\n> * Corollary 3: for multi-level partitioning with some intermediate\n> ancestor(s) marked as autovacuum_enabled=on, that ancestor will\n> receive all the counts from all of its partitions, so it will get\n> analyzed itself; and it'll also forward those counts up to its\n> report-ancestor.\n\nThis seems awfully confusing to me.\n\nOne fundamental issue here is that we separately build stats for partitioned\ntables and partitions. Can we instead tackle this by reusing the stats for\npartitions for the inheritance stats? I think it's a bit easier to do that\nfor partitioned tables than for old school inheritance roots, because there are\nno other rows in partitioned tables.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 13 Aug 2021 02:35:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Here is a proposal for 14. This patch has four main changes:\n\n* The mod counts are only propagated to the topmost parent, not to each ancestor. This means that we'll only analyze the topmost partitioned table and not each intermediate partitioned table; seems a good compromise to avoid sampling all partitions multiple times per round.\n\n* One pgstat message is sent containing many partition/parent pairs, not just one. This reduces the number of messages sent. 123 partitions fit in one message (messages are 1000 bytes). This is done once per autovacuum worker run, so it shouldn't be too bad.\n\n* There's a sleep between sending the message and re-reading stats. It would be great to have a mechanism by which the pgstat collector says \"I've received and processed up to this point\", but we don't have that; what we can do is sleep PGSTAT_STAT_INTERVAL and then reread the file, so we're certain that the file we read is at least as new as that time. This is far longer than it takes to process the messages. Note that if the messages do take longer than that to be processed by the collector, it's not a big loss anyway; those tables will be processed by the next autovacuum run.\n\n* I changed vacuum_expand_rel to put the main-rel OID at the end. (This is a variation of Horiguchi-san's proposed patch; instead of making the complete list be in the opposite order, it's just that one OID that appears at the other end). This has the same effect as his patch: any error reports thrown by vacuum/analyze mention the first partition rather than the main table. 
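As a self-contained illustration of that last reordering (not the actual patch -- vacuum_expand_rel really manipulates a List of OIDs, this sketch uses a plain array):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative sketch only: rotate the main relation's OID from the
 * front of the array to the end, so that the partitions come first and
 * any error report names a partition rather than the main table.
 * Oid is just an unsigned int stand-in here. */
typedef unsigned int Oid;

static void
move_main_rel_last(Oid *oids, size_t n)
{
    size_t  i;
    Oid     main_oid;

    if (n < 2)
        return;                     /* nothing to reorder */
    main_oid = oids[0];             /* caller puts the main rel first */
    for (i = 1; i < n; i++)
        oids[i - 1] = oids[i];      /* shift partitions toward the front */
    oids[n - 1] = main_oid;         /* main rel is now processed last */
}
```

The point is only the ordering: whatever gets reported first is a partition.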
This part is in 0002 and I'm not totally convinced it's a sane idea.\n\nMinor changes:\n* I reduced autovacuum from three passes over pg_class to two passes, per your observation that we can acquire toast association together with processing partitions, and then use that in the second pass to collect everything.\n\n* I moved the catalog-accessing code to partition.c, so we don't need to have pgstat.c doing it.\n\nSome doc changes are pending, and some more commentary in parts of the code, but I think this is much more sensible. I do lament the lack of a syscache for pg_inherits.",
"msg_date": "Fri, 13 Aug 2021 14:50:16 -0400",
"msg_from": "=?UTF-8?Q?=C3=81lvaro_Herrera?= <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On 2021-Aug-13, Álvaro Herrera wrote:\n\n> Some doc changes are pending, and some more commentary in parts of the\n> code, but I think this is much more sensible. I do lament the lack of\n> a syscache for pg_inherits.\n\nThinking about this again, this one here is the killer problem, I think;\nthis behaves pretty horribly if you have more than one partition level,\nbecause it'll have to do one indexscan *per level per partition*. (For\nexample, five partitions two levels down mean ten index scans). There's\nno cache for this, and no way to disable it. So for situations with a\nlot of partitions, it could be troublesome. Granted, it only needs to\nbe done for partitions with DML changes since the previous autovacuum\nworker run in the affected database, but still it could be significant.\n\nNow we could perhaps have a hash table in partition_analyze_report_ancestors()\nto avoid the need for repeated indexscans for partitions of the same\nhierarchy (an open-coded cache to take the place of the missing\npg_inherits syscache); and perhaps even use a single seqscan of\npg_inherits to capture the whole story first and then filter down to the\npartitions that we were asked to process ... (so are we building a\nmini-optimizer to determine which strategy to use in each case?).\n\nThat all sounds too much to be doing in the beta.\n\nSo I'm leaning towards the idea that we need to revert the patch and\nstart over for pg15.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La libertad es como el dinero; el que no la sabe emplear la pierde\" (Alvarez)\n\n\n",
"msg_date": "Mon, 16 Aug 2021 10:03:18 -0400",
"msg_from": "=?utf-8?Q?=C3=81lvaro?= Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Here's the reversal patch for the 14 branch. (It applies cleanly to\nmaster, but the unused member of PgStat_StatTabEntry needs to be\nremoved and catversion bumped).\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\nMaybe there's lots of data loss but the records of data loss are also lost.\n(Lincoln Yeoh)",
"msg_date": "Mon, 16 Aug 2021 11:46:16 -0400",
"msg_from": "=?utf-8?Q?=C3=81lvaro?= Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "=?utf-8?Q?=C3=81lvaro?= Herrera <alvherre@alvh.no-ip.org> writes:\n> Here's the reversal patch for the 14 branch. (It applies cleanly to\n> master, but the unused member of PgStat_StatTabEntry needs to be\n> removed and catversion bumped).\n\nI don't follow the connection to catversion?\n\nI agree that we probably don't want to change PgStat_StatTabEntry in\nv14 at this point. But it'd be a good idea to attach a comment to\nthe entry saying it's unused but left there for ABI reasons.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Aug 2021 12:17:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On 2021-Aug-16, Tom Lane wrote:\n\n> =?utf-8?Q?=C3=81lvaro?= Herrera <alvherre@alvh.no-ip.org> writes:\n> > Here's the reversal patch for the 14 branch. (It applies cleanly to\n> > master, but the unused member of PgStat_StatTabEntry needs to be\n> > removed and catversion bumped).\n> \n> I don't follow the connection to catversion?\n\nSorry, I misspoke -- I mean PGSTAT_FILE_FORMAT_ID. I shouldn't just\nchange it, since if I do then the file is reported as corrupted and all\ncounters are lost. So in the posted patch I did as you suggest:\n\n> I agree that we probably don't want to change PgStat_StatTabEntry in\n> v14 at this point. But it'd be a good idea to attach a comment to\n> the entry saying it's unused but left there for ABI reasons.\n\nIt's only in branch master that I'd change the pgstat format version and\nremove the field. This is what I meant with the patch being for v14 and\na tweak needed for this in master.\n\nA catversion bump would be required to change the definition of\npg_stat_user_tables, which the patch being reverted originally changed\nto include relkind 'p'. A straight revert would remove that, but in my\nreversal patch I chose to keep it in place.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Pensar que el espectro que vemos es ilusorio no lo despoja de espanto,\nsólo le suma el nuevo terror de la locura\" (Perelandra, C.S. Lewis)\n\n\n",
"msg_date": "Mon, 16 Aug 2021 13:00:00 -0400",
"msg_from": "=?utf-8?Q?=C3=81lvaro?= Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Another possible problem is that before the revert, we accept\nALTER TABLE some_partitioned_table SET (autovacuum_enabled=on/off);\n(also autovacuum_analyze_scale_factor and autovacuum_analyze_threshold)\nbut after the revert this will throw a syntax error. What do people\nthink we should do about that?\n\n1. Do nothing. If somebody finds themselves in that situation, they can use\n ALTER TABLE .. RESET ...\n to remove the settings.\n\n2. Silently accept the option and do nothing.\n3. Accept the option and throw a warning that it's a no-op.\n4. Something else\n\nOpinions?\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\nOfficer Krupke, what are we to do?\nGee, officer Krupke, Krup you! (West Side Story, \"Gee, Officer Krupke\")\n\n\n",
"msg_date": "Mon, 16 Aug 2021 13:13:55 -0400",
"msg_from": "=?utf-8?Q?=C3=81lvaro?= Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On 2021-Aug-16, Álvaro Herrera wrote:\n\n> Here's the reversal patch for the 14 branch. (It applies cleanly to\n> master, but the unused member of PgStat_StatTabEntry needs to be\n> removed and catversion bumped).\n\nI have pushed this to both branches. (I did not remove the item from\nthe release notes in the 14 branch.)\n\nIt upsets me to have reverted it, but after spending so much time trying\nto correct the problems, I believe it just wasn't salvageable within the\nbeta-period code freeze constraints. I described the issues I ran into\nin earlier messages; I think a good starting point to re-develop this is\nto revert the reversal commit, then apply my patch at\nhttps://postgr.es/m/0794d7ca-5183-486b-9c5e-6d434867cecd@www.fastmail.com\nthen do something about the remaining problems that were complained\nabout. (Maybe: add an \"ancestor OID\" member to PgStat_StatTabEntry so\nthat the collector knows to propagate counts from children to ancestors\nwhen the upd/ins/del counts are received. However, consider developing\nit as follow-up to Horiguchi-san's shmem pgstat rather than current\npgstat implementation.)\n\nThanks\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 16 Aug 2021 17:42:48 -0400",
"msg_from": "=?utf-8?Q?=C3=81lvaro?= Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On Mon, Aug 16, 2021 at 05:42:48PM -0400, Álvaro Herrera wrote:\n> On 2021-Aug-16, Álvaro Herrera wrote:\n> \n> > Here's the reversal patch for the 14 branch. (It applies cleanly to\n> > master, but the unused member of PgStat_StatTabEntry needs to be\n> > removed and catversion bumped).\n> \n> I have pushed this to both branches. (I did not remove the item from\n> the release notes in the 14 branch.)\n\n| I retained the addition of relkind 'p' to tables included by\n| pg_stat_user_tables, because reverting that would require a catversion\n| bump.\n\nRight now, on v15dev, it shows 0, which is misleading.\nShouldn't it be null ?\n\nanalyze_count | 0\n\nNote that having analyze_count and last_analyze would be an independently\nuseful change. Since parent tables aren't analyzed automatically, I have a\nscript to periodically process them if they weren't processed recently. Right\nnow, for partitioned tables, the best I could find is to check its partitions:\n| MIN(last_analyzed) FROM pg_stat_all_tables psat JOIN pg_inherits i ON psat.relid=i.inhrelid\n\nIn 20200418050815.GE26953@telsasoft.com I wrote:\n|This patch includes partitioned tables in pg_stat_*_tables, which is great; I\n|complained awhile ago that they were missing [0]. It might be useful if that\n|part was split out into a separate 0001 patch (?).\n| [0] https://www.postgresql.org/message-id/20180601221428.GU5164%40telsasoft.com\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 16 Aug 2021 17:28:10 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Hi,\n\nOn 2021-08-16 17:42:48 -0400, Álvaro Herrera wrote:\n> On 2021-Aug-16, Álvaro Herrera wrote:\n>\n> > Here's the reversal patch for the 14 branch. (It applies cleanly to\n> > master, but the unused member of PgStat_StatTabEntry needs to be\n> > removed and catversion bumped).\n>\n> I have pushed this to both branches. (I did not remove the item from\n> the release notes in the 14 branch.)\n>\n> It upsets me to have reverted it, but after spending so much time trying\n> to correct the problems, I believe it just wasn't salvageable within the\n> beta-period code freeze constraints.\n\n:(\n\n\n> I described the issues I ran into\n> in earlier messages; I think a good starting point to re-develop this is\n> to revert the reversal commit, then apply my patch at\n> https://postgr.es/m/0794d7ca-5183-486b-9c5e-6d434867cecd@www.fastmail.com\n> then do something about the remaining problems that were complained\n> about. (Maybe: add an \"ancestor OID\" member to PgStat_StatTabEntry so\n> that the collector knows to propagate counts from children to ancestors\n> when the upd/ins/del counts are received.\n\nMy suspicion is that it'd be a lot easier to implement this efficiently if\nthere were no propagation done outside of actually analyzing tables. I.e. have\ndo_autovacuum() build a hashtable of (parent_table_id, count) and use that to\nmake the analyze decisions. And then only propagate up the costs to parents of\ntables when a child is analyzed (and thus loses its changes_since_analyze\nvalue). Then we can use hashtable_value + changes_since_analyze for\npartitioning decisions of partitioned tables.\n\nI've prototyped this, and it does seem to make do_autovacuum() cheaper. 
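In very rough, self-contained form, the accumulation side could look like the following (an illustration of the idea only, not the attached prototype; the real thing would use a dynahash keyed by the parent's OID):

```c
#include <assert.h>

/* Illustrative sketch: while scanning pg_class, do_autovacuum()
 * accumulates per-parent change counts keyed by the parent table's
 * OID, and the analyze decision for a partitioned table consults the
 * accumulated total. A tiny fixed-size linear table stands in for
 * PostgreSQL's dynahash here. */
typedef unsigned int Oid;

#define MAXPARENTS 64

typedef struct
{
    Oid     parent;
    long    changes;
} ParentEntry;

static ParentEntry parents[MAXPARENTS];
static int nparents = 0;

static void
add_parent_changes(Oid parent, long changes_since_analyze)
{
    int     i;

    for (i = 0; i < nparents; i++)
        if (parents[i].parent == parent)
        {
            parents[i].changes += changes_since_analyze;
            return;
        }
    assert(nparents < MAXPARENTS);  /* toy bound; dynahash grows instead */
    parents[nparents].parent = parent;
    parents[nparents].changes = changes_since_analyze;
    nparents++;
}

static long
parent_changes(Oid parent)
{
    int     i;

    for (i = 0; i < nparents; i++)
        if (parents[i].parent == parent)
            return parents[i].changes;
    return 0;                       /* no partition reported changes */
}
```

do_autovacuum() would call add_parent_changes() per partition during the pg_class scan, and then use parent_changes() for the partitioned tables' analyze decisions.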
I've\nattached that prototype, but note it's in a rough state.\n\nHowever, unless we change the way inheritance parents are stored, it still\nrequires repetitive get_partition_ancestors() (or get_partition_parent())\ncalls in do_autovacuum(), which I think is problematic due to the index scans\nyou pointed out as well. The obvious way to address that would be to store\nparent oids in pg_class - I suspect duplicating parents in pg_class is the\nbest way out, but pretty it is not.\n\n\n> However, consider developing it as follow-up to Horiguchi-san's shmem\n> pgstat rather than current pgstat implementation.)\n\n+1\n\n\nIt might be worth to first tackle reusing samples from a relation's children\nwhen building inheritance stats. Either by storing the samples somewhere (not\ncheap) and reusing them, or by at least updating a partition's stats when\nanalyzing the parent.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 17 Aug 2021 03:49:50 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "Hi,\n\nOn 2021-08-16 13:13:55 -0400, Álvaro Herrera wrote:\n> Another possible problem is that before the revert, we accept\n> ALTER TABLE some_partitioned_table SET (autovacuum_enabled=on/off);\n> (also autovacuum_analyze_scale_factor and autovacuum_analyze_threshold)\n> but after the revert this will throw a syntax error. What do people\n> think we should do about that?\n> \n> 1. Do nothing. If somebody finds themselves in that situation, they can use\n> ALTER TABLE .. RESET ...\n> to remove the settings.\n> \n> 2. Silently accept the option and do nothing.\n> 3. Accept the option and throw a warning that it's a no-op.\n> 4. Something else\n\n1) seems OK to me.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 17 Aug 2021 03:50:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On Mon, Aug 16, 2021 at 05:28:10PM -0500, Justin Pryzby wrote:\n> On Mon, Aug 16, 2021 at 05:42:48PM -0400, Álvaro Herrera wrote:\n> > On 2021-Aug-16, Álvaro Herrera wrote:\n> > \n> > > Here's the reversal patch for the 14 branch. (It applies cleanly to\n> > > master, but the unused member of PgStat_StatTabEntry needs to be\n> > > removed and catversion bumped).\n> > \n> > I have pushed this to both branches. (I did not remove the item from\n> > the release notes in the 14 branch.)\n> \n> | I retained the addition of relkind 'p' to tables included by\n> | pg_stat_user_tables, because reverting that would require a catversion\n> | bump.\n> \n> Right now, on v15dev, it shows 0, which is misleading.\n> Shouldn't it be null ?\n> \n> analyze_count | 0\n> \n> Note that having analyze_count and last_analyze would be an independently\n> useful change. Since parent tables aren't analyzed automatically, I have a\n> script to periodically process them if they weren't processed recently. Right\n> now, for partitioned tables, the best I could find is to check its partitions:\n> | MIN(last_analyzed) FROM pg_stat_all_tables psat JOIN pg_inherits i ON psat.relid=i.inhrelid\n> \n> In 20200418050815.GE26953@telsasoft.com I wrote:\n> |This patch includes partitioned tables in pg_stat_*_tables, which is great; I\n> |complained awhile ago that they were missing [0]. It might be useful if that\n> |part was split out into a separate 0001 patch (?).\n> | [0] https://www.postgresql.org/message-id/20180601221428.GU5164%40telsasoft.com\n\nI suggest the attached (which partially reverts the revert), to allow showing\ncorrect data for analyze_count and last_analyzed.\n\nArguably these should be reported as null in v14 for partitioned tables, since\nthey're not \"known to be zero\", but rather \"currently unpopulated\".\n\nn_mod_since_analyze | 0\nn_ins_since_vacuum | 0\n\nJustin",
"msg_date": "Tue, 17 Aug 2021 06:30:18 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On Tue, Aug 17, 2021 at 06:30:18AM -0500, Justin Pryzby wrote:\n> On Mon, Aug 16, 2021 at 05:28:10PM -0500, Justin Pryzby wrote:\n> > On Mon, Aug 16, 2021 at 05:42:48PM -0400, Álvaro Herrera wrote:\n> > > On 2021-Aug-16, Álvaro Herrera wrote:\n> > > \n> > > > Here's the reversal patch for the 14 branch. (It applies cleanly to\n> > > > master, but the unused member of PgStat_StatTabEntry needs to be\n> > > > removed and catversion bumped).\n> > > \n> > > I have pushed this to both branches. (I did not remove the item from\n> > > the release notes in the 14 branch.)\n> > \n> > | I retained the addition of relkind 'p' to tables included by\n> > | pg_stat_user_tables, because reverting that would require a catversion\n> > | bump.\n> > \n> > Right now, on v15dev, it shows 0, which is misleading.\n> > Shouldn't it be null ?\n> > \n> > analyze_count | 0\n> > \n> > Note that having analyze_count and last_analyze would be an independently\n> > useful change. Since parent tables aren't analyzed automatically, I have a\n> > script to periodically process them if they weren't processed recently. Right\n> > now, for partitioned tables, the best I could find is to check its partitions:\n> > | MIN(last_analyzed) FROM pg_stat_all_tables psat JOIN pg_inherits i ON psat.relid=i.inhrelid\n> > \n> > In 20200418050815.GE26953@telsasoft.com I wrote:\n> > |This patch includes partitioned tables in pg_stat_*_tables, which is great; I\n> > |complained awhile ago that they were missing [0]. It might be useful if that\n> > |part was split out into a separate 0001 patch (?).\n> > | [0] https://www.postgresql.org/message-id/20180601221428.GU5164%40telsasoft.com\n> \n> I suggest the attached (which partially reverts the revert), to allow showing\n> correct data for analyze_count and last_analyzed.\n\nÁlvaro, would you comment on this ?\n\nTo me this could be an open item, but someone else should make that\ndetermination.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 20 Aug 2021 07:55:13 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On Fri, Aug 20, 2021 at 07:55:13AM -0500, Justin Pryzby wrote:\n> On Tue, Aug 17, 2021 at 06:30:18AM -0500, Justin Pryzby wrote:\n> > On Mon, Aug 16, 2021 at 05:28:10PM -0500, Justin Pryzby wrote:\n> > > On Mon, Aug 16, 2021 at 05:42:48PM -0400, Álvaro Herrera wrote:\n> > > > On 2021-Aug-16, Álvaro Herrera wrote:\n> > > > \n> > > > > Here's the reversal patch for the 14 branch. (It applies cleanly to\n> > > > > master, but the unused member of PgStat_StatTabEntry needs to be\n> > > > > removed and catversion bumped).\n> > > > \n> > > > I have pushed this to both branches. (I did not remove the item from\n> > > > the release notes in the 14 branch.)\n> > > \n> > > | I retained the addition of relkind 'p' to tables included by\n> > > | pg_stat_user_tables, because reverting that would require a catversion\n> > > | bump.\n> > > \n> > > Right now, on v15dev, it shows 0, which is misleading.\n> > > Shouldn't it be null ?\n> > > \n> > > analyze_count | 0\n> > > \n> > > Note that having analyze_count and last_analyze would be an independently\n> > > useful change. Since parent tables aren't analyzed automatically, I have a\n> > > script to periodically process them if they weren't processed recently. Right\n> > > now, for partitioned tables, the best I could find is to check its partitions:\n> > > | MIN(last_analyzed) FROM pg_stat_all_tables psat JOIN pg_inherits i ON psat.relid=i.inhrelid\n> > > \n> > > In 20200418050815.GE26953@telsasoft.com I wrote:\n> > > |This patch includes partitioned tables in pg_stat_*_tables, which is great; I\n> > > |complained awhile ago that they were missing [0]. 
It might be useful if that\n> > > |part was split out into a separate 0001 patch (?).\n> > > | [0] https://www.postgresql.org/message-id/20180601221428.GU5164%40telsasoft.com\n> > \n> > I suggest the attached (which partially reverts the revert), to allow showing\n> > correct data for analyze_count and last_analyzed.\n> \n> Álvaro, would you comment on this ?\n> \n> To me this could be an open item, but someone else should make that\n> determination.\n\nI added an open item until this is discussed.\n| pg_stats includes partitioned tables, but always shows analyze_count=0\n| Owner: Alvaro Herrera\n\nPossible solutions, in decreasing order of my own preference:\n\n - partially revert the revert, as proposed, to have \"analyze_count\" and\n \"last_analyzed\" work properly for partitioned tables. This doesn't suffer\n from any of the problems that led to the revert, does it ?\n\n - Update the .c code to return analyze_count=NULL for partitioned tables.\n\n - Update the catalog definition to exclude partitioned tables, again.\n Requires a catversion bump.\n\n - Document that analyze_count=NULL for partitioned tables. It seems to just\n document a misbehavior.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 25 Aug 2021 14:29:51 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
},
{
"msg_contents": "On 2021-Aug-17, Justin Pryzby wrote:\n\n> I suggest the attached (which partially reverts the revert), to allow showing\n> correct data for analyze_count and last_analyzed.\n\nYeah, that makes sense and my keeping of the pg_stat_all_tables entries\nseems pretty useless without this change. I have pushed a slightly\nmodified version of this to 14 and master.\n\n> Arguably these should be reported as null in v14 for partitioned tables, since\n> they're not \"known to be zero\", but rather \"currently unpopulated\".\n> \n> n_mod_since_analyze | 0\n> n_ins_since_vacuum | 0\n\nI don't disagree, but it's not easy to implement this at present. I\nthink almost all counters should be nulls for partitioned tables. For\nsome of them one could make a case that it'd be more convenient to\npropagate numbers up from partitions.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Sat, 28 Aug 2021 16:03:41 -0400",
"msg_from": "=?utf-8?Q?=C3=81lvaro?= Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum on partitioned table (autoanalyze)"
}
]
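The staleness-check query Justin quotes in this thread is only a fragment (its SELECT list and grouping are elided). A minimal self-contained sketch of the same idea, in which the grouping, the column aliases, and the regclass cast are illustrative assumptions rather than text from the thread, might look like:

```sql
-- Sketch only: oldest last_analyzed among each partitioned parent's
-- partitions, using the catalogs cited in the thread
-- (pg_stat_all_tables, pg_inherits). Aliases and GROUP BY are assumed.
SELECT i.inhparent::regclass   AS parent,
       MIN(psat.last_analyzed) AS oldest_partition_analyze
FROM pg_stat_all_tables psat
JOIN pg_inherits i ON psat.relid = i.inhrelid
GROUP BY i.inhparent;
```

Parents whose oldest_partition_analyze is old (or NULL) would then be candidates for a manually scheduled ANALYZE, which is the workflow the email describes.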
[
{
"msg_contents": "Currently, configure puts the configure args into the makefiles and\nthen have the makefiles pass them to the build of pg_config. That looks \nlike an unnecessary redirection, and indeed that method was\nput in place when pg_config was a shell script. We can simplify that\nby having configure put the value into pg_config.h directly. This\nalso makes the standard build system match how the MSVC build system\nalready does it.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 2 Dec 2019 13:27:19 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Simplify passing of configure arguments to pg_config"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Currently, configure puts the configure args into the makefiles and\n> then have the makefiles pass them to the build of pg_config. That looks \n> like an unnecessary redirection, and indeed that method was\n> put in place when pg_config was a shell script. We can simplify that\n> by having configure put the value into pg_config.h directly. This\n> also makes the standard build system match how the MSVC build system\n> already does it.\n\nI dunno, is this really an improvement? It makes the handling of\nVAL_CONFIGURE different from every other one of the values passed\ninto pg_config, and I don't see any countervailing addition of\nsome other regularity. I'm also a bit suspicious of the ad-hoc\nescaping step ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 03 Dec 2019 00:03:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Simplify passing of configure arguments to pg_config"
},
{
"msg_contents": "On 2019-12-03 06:03, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> Currently, configure puts the configure args into the makefiles and\n>> then have the makefiles pass them to the build of pg_config. That looks\n>> like an unnecessary redirection, and indeed that method was\n>> put in place when pg_config was a shell script. We can simplify that\n>> by having configure put the value into pg_config.h directly. This\n>> also makes the standard build system match how the MSVC build system\n>> already does it.\n> \n> I dunno, is this really an improvement? It makes the handling of\n> VAL_CONFIGURE different from every other one of the values passed\n> into pg_config, and I don't see any countervailing addition of\n> some other regularity.\n\nThe other values come from the makefiles, so we have to do it that way. \nThe configure args come from configure, so why make them go through the \nmakefile? (PG_VERSION also comes in that way. ;-) )\n\nThere is also the weird difference with how the MSVC build system \nhandles it. It appends VAL_CONFIGURE to pg_config.h instead of passing \nit on the command line.\n\n> I'm also a bit suspicious of the ad-hoc escaping step ...\n\nHmm, the current way doesn't handle embedded quotes at all, so perhaps \nthis wouldn't be necessary. But it would add some robustness.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 4 Dec 2019 11:30:17 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Simplify passing of configure arguments to pg_config"
},
{
"msg_contents": "On 2019-12-04 11:30, Peter Eisentraut wrote:\n> On 2019-12-03 06:03, Tom Lane wrote:\n>> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>>> Currently, configure puts the configure args into the makefiles and\n>>> then have the makefiles pass them to the build of pg_config. That looks\n>>> like an unnecessary redirection, and indeed that method was\n>>> put in place when pg_config was a shell script. We can simplify that\n>>> by having configure put the value into pg_config.h directly. This\n>>> also makes the standard build system match how the MSVC build system\n>>> already does it.\n>>\n>> I dunno, is this really an improvement? It makes the handling of\n>> VAL_CONFIGURE different from every other one of the values passed\n>> into pg_config, and I don't see any countervailing addition of\n>> some other regularity.\n> \n> The other values come from the makefiles, so we have to do it that way.\n> The configure args come from configure, so why make them go through the\n> makefile? (PG_VERSION also comes in that way. ;-) )\n> \n> There is also the weird difference with how the MSVC build system\n> handles it. It appends VAL_CONFIGURE to pg_config.h instead of passing\n> it on the command line.\n\nHere is an updated version of the patch after the removal of \npg_config.h.win32. It's easier to see now how this helps unify the \nhandling of this between the two build systems.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sun, 22 Dec 2019 14:56:07 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Simplify passing of configure arguments to pg_config"
},
{
"msg_contents": "On 2019-12-22 14:56, Peter Eisentraut wrote:\n>> There is also the weird difference with how the MSVC build system\n>> handles it. It appends VAL_CONFIGURE to pg_config.h instead of passing\n>> it on the command line.\n> \n> Here is an updated version of the patch after the removal of\n> pg_config.h.win32. It's easier to see now how this helps unify the\n> handling of this between the two build systems.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 10 Feb 2020 19:34:03 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Simplify passing of configure arguments to pg_config"
}
]
[
{
"msg_contents": "Folks,\n\nWhile it would be lovely if we could all plan all table partitioning\nin advance of inserting the first row, that's not the reality, and has\nlittle prospect of becoming the reality. One reality I'm dealing with\nis where people discover the list partitions they need to add based on\nthe contents of the table. This is common in cases of multi-tenancy\nwhere (as usual) tenant row counts are far from uniform.\n\nI'd like to make it easier to add a partition to an existing table\nwhen a default partition (or others, but let's keep this scope\nreasonable in size) already has lots of rows in it, some of which\nwould need to move to the new partition. At the moment, this operation\nfails with an error message that's not super helpful in moving\nforward.\n\nTo deal with this situation, I'd like to make some proposals for\nfunctionality that would fix this\n\n1.\n\n CREATE TABLE ... PARTITION OF ... NOT VALID/\n ALTER TABLE ... ATTACH PARTITION ... NOT VALID\n\nThis would create (or attach) a partition, immediately start routing\nnew and updated tuples to it, and would ignore any matching tuples in\nthe default partition for the purposes of completing. Obviously, this\nsituation would require manual cleanup steps. It would also require an\noption along the lines of a new command:\n\n ALTER TABLE ... VALIDATE.\n\nThis would scan the default partition with all the attendant \"fun.\"\n\n1a. Add an option along the lines of ALTER TABLE ... VALIDATE FORCE\n\nWith FORCE, it would simply mark the partition as valid in the\ncatalog, allowing for the possibility of pilot error, but requiring\nonly a brief lock.\n\n2.\n CREATE TABLE CONCURRENTLY ... PARTITION OF .../\n ALTER TABLE CONCURRENTLY ... ATTACH PARTITION ... \n\nThis would also make an \"INVALID\" partition as above, move the tuples\nin the background, and set it to VALID upon completion of the task, as\nCREATE INDEX CONCURRENTLY does now.\n\n2a. Make CREATE TABLE ... 
PARTITION OF / ALTER TABLE ... ATTACH\nPARTITION have the above behavior.\n\n3. Unmodified commands that do the tuple-moving themselves while\nholding an AEL. I'm pretty sure this one's a non-starter, but I wanted\nto mention it for completeness.\n\nWhat say?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Mon, 2 Dec 2019 16:20:51 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Adding a Partition is Painful"
}
]
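None of the NOT VALID or CONCURRENTLY forms proposed in this thread exist; what they would automate currently has to be scripted by hand, holding a strong lock for the duration. A rough sketch of that manual sequence, with invented table, partition, and column names (tbl, tbl_default, tbl_tenant42, tenant_id) purely for illustration, is:

```sql
-- Hypothetical names; assumes tbl is list-partitioned with a default
-- partition, per the multi-tenancy scenario in the email.
BEGIN;
ALTER TABLE tbl DETACH PARTITION tbl_default;
CREATE TABLE tbl_tenant42 PARTITION OF tbl FOR VALUES IN (42);
INSERT INTO tbl_tenant42 SELECT * FROM tbl_default WHERE tenant_id = 42;
DELETE FROM tbl_default WHERE tenant_id = 42;
ALTER TABLE tbl ATTACH PARTITION tbl_default DEFAULT;
COMMIT;
```

The whole transaction blocks concurrent use of tbl while the rows move, which is essentially the pain point the proposals aim to remove.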
[
{
"msg_contents": "I see from the buildfarm status page that since commits 6b802cfc7\net al went in a week ago, frogmouth and currawong have failed that\nnew test case every time, with the symptom\n\n================== pgsql.build/src/test/isolation/regression.diffs ===================\n*** c:/prog/bf/root/REL_10_STABLE/pgsql.build/src/test/isolation/expected/async-notify.out\tMon Nov 25 00:30:49 2019\n--- c:/prog/bf/root/REL_10_STABLE/pgsql.build/src/test/isolation/results/async-notify.out\tMon Dec 2 00:54:26 2019\n***************\n*** 93,99 ****\n step llisten: LISTEN c1; LISTEN c2;\n step lcommit: COMMIT;\n step l2commit: COMMIT;\n- listener2: NOTIFY \"c1\" with payload \"\" from notifier\n step l2stop: UNLISTEN *;\n \n starting permutation: llisten lbegin usage bignotify usage\n--- 93,98 ----\n\n(Note that these two critters don't run branches v11 and up, which\nis why they're only showing this failure in 10 and 9.6.)\n\ndrongo showed the same failure once in v10, and fairywren showed\nit once in v12. Every other buildfarm animal seems happy.\n\nI'm a little baffled as to what this might be --- some sort of\ntiming problem in our Windows signal emulation, perhaps? But\nif so, why haven't we found it years ago?\n\nI don't have any ability to test this myself, so would appreciate\nhelp or ideas.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 Dec 2019 11:23:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Windows buildfarm members vs. new async-notify isolation test"
},
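Reconstructed from the step names in the regression diff above (a hedged approximation of the interleaving, not the actual isolation spec file), the failing permutation boils down to cross-backend delivery like this:

```sql
-- Interleaving inferred from step names llisten/l2commit; not verbatim.
-- session listener2:
LISTEN c1; LISTEN c2;
BEGIN;
-- session notifier (separate backend):
NOTIFY c1;     -- queued, then signalled to listeners at notifier's commit
-- session listener2 again:
COMMIT;        -- expected output here: NOTIFY "c1" with payload "" from notifier
UNLISTEN *;
```

The buildfarm failure is precisely that the NOTIFY line fails to appear before the l2stop (UNLISTEN) step on the affected Windows animals.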
{
"msg_contents": "\nOn 12/2/19 11:23 AM, Tom Lane wrote:\n> I see from the buildfarm status page that since commits 6b802cfc7\n> et al went in a week ago, frogmouth and currawong have failed that\n> new test case every time, with the symptom\n>\n> ================== pgsql.build/src/test/isolation/regression.diffs ===================\n> *** c:/prog/bf/root/REL_10_STABLE/pgsql.build/src/test/isolation/expected/async-notify.out\tMon Nov 25 00:30:49 2019\n> --- c:/prog/bf/root/REL_10_STABLE/pgsql.build/src/test/isolation/results/async-notify.out\tMon Dec 2 00:54:26 2019\n> ***************\n> *** 93,99 ****\n> step llisten: LISTEN c1; LISTEN c2;\n> step lcommit: COMMIT;\n> step l2commit: COMMIT;\n> - listener2: NOTIFY \"c1\" with payload \"\" from notifier\n> step l2stop: UNLISTEN *;\n> \n> starting permutation: llisten lbegin usage bignotify usage\n> --- 93,98 ----\n>\n> (Note that these two critters don't run branches v11 and up, which\n> is why they're only showing this failure in 10 and 9.6.)\n>\n> drongo showed the same failure once in v10, and fairywren showed\n> it once in v12. Every other buildfarm animal seems happy.\n>\n> I'm a little baffled as to what this might be --- some sort of\n> timing problem in our Windows signal emulation, perhaps? But\n> if so, why haven't we found it years ago?\n>\n> I don't have any ability to test this myself, so would appreciate\n> help or ideas.\n\n\n\nI can test things, but I don't really know what to test. FYI frogmouth\nand currawong run on virtualized XP. drongo and fairywren run on\nvirtualized WS2019. 
Neither VM is heavily resourced.\n\n\nNote that my other windows critters jacana and bowerbird which run on\nbare metal W10Pro haven't seen this problem.\n\n\n(BTW, from next week I'm going to be down under for 3 months, and my\nability to test Windows things will be somewhat reduced.)\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Mon, 2 Dec 2019 14:42:22 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows buildfarm members vs. new async-notify isolation test"
},
{
"msg_contents": "\n\nOn 12/2/19 11:42 AM, Andrew Dunstan wrote:\n> \n> On 12/2/19 11:23 AM, Tom Lane wrote:\n>> I see from the buildfarm status page that since commits 6b802cfc7\n>> et al went in a week ago, frogmouth and currawong have failed that\n>> new test case every time, with the symptom\n>>\n>> ================== pgsql.build/src/test/isolation/regression.diffs ===================\n>> *** c:/prog/bf/root/REL_10_STABLE/pgsql.build/src/test/isolation/expected/async-notify.out\tMon Nov 25 00:30:49 2019\n>> --- c:/prog/bf/root/REL_10_STABLE/pgsql.build/src/test/isolation/results/async-notify.out\tMon Dec 2 00:54:26 2019\n>> ***************\n>> *** 93,99 ****\n>> step llisten: LISTEN c1; LISTEN c2;\n>> step lcommit: COMMIT;\n>> step l2commit: COMMIT;\n>> - listener2: NOTIFY \"c1\" with payload \"\" from notifier\n>> step l2stop: UNLISTEN *;\n>> \n>> starting permutation: llisten lbegin usage bignotify usage\n>> --- 93,98 ----\n>>\n>> (Note that these two critters don't run branches v11 and up, which\n>> is why they're only showing this failure in 10 and 9.6.)\n>>\n>> drongo showed the same failure once in v10, and fairywren showed\n>> it once in v12. Every other buildfarm animal seems happy.\n>>\n>> I'm a little baffled as to what this might be --- some sort of\n>> timing problem in our Windows signal emulation, perhaps? But\n>> if so, why haven't we found it years ago?\n>>\n>> I don't have any ability to test this myself, so would appreciate\n>> help or ideas.\n> \n> \n> \n> I can test things, but I don't really know what to test. FYI frogmouth\n> and currawong run on virtualized XP. drongo anf fairywrne run on\n> virtualized WS2019. Neither VM is heavily resourced.\n\nHi Andrew, if you have time you could perhaps check the\nisolation test structure itself. 
Like Tom, I don't have a\nWindows box to test this.\n\nI would be curious to see if there is a race condition in\nsrc/test/isolation/isolationtester.c between the loop starting\non line 820:\n\n while ((res = PQgetResult(conn)))\n {\n ...\n }\n\nand the attempt to consume input that might include NOTIFY\nmessages on line 861:\n\n PQconsumeInput(conn);\n\nIf the first loop consumes the commit message, gets no\nfurther PGresult from PQgetResult, and finishes, and execution\nproceeds to PQconsumeInput before the NOTIFY has arrived\nover the socket, there won't be anything for PQnotifies to\nreturn, and hence for try_complete_step to print before\nreturning.\n\nI'm not sure if it is possible for the commit message to\narrive before the notify message in the fashion I am describing,\nbut that's something you might easily check by having\nisolationtester sleep before PQconsumeInput on line 861.\n\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Tue, 3 Dec 2019 07:11:47 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows buildfarm members vs. new async-notify isolation test"
},
{
"msg_contents": "Mark Dilger <hornschnorter@gmail.com> writes:\n> On 12/2/19 11:42 AM, Andrew Dunstan wrote:\n>> On 12/2/19 11:23 AM, Tom Lane wrote:\n>>> I'm a little baffled as to what this might be --- some sort of\n>>> timing problem in our Windows signal emulation, perhaps? But\n>>> if so, why haven't we found it years ago?\n\n> I would be curious to see if there is a race condition in\n> src/test/isolation/isolationtester.c between the loop starting\n> on line 820:\n> while ((res = PQgetResult(conn)))\n> {\n> ...\n> }\n> and the attempt to consume input that might include NOTIFY\n> messages on line 861:\n> PQconsumeInput(conn);\n\nIn principle, the issue should not be there, because commits\n790026972 et al should have ensured that the NOTIFY protocol\nmessage comes out before ReadyForQuery (and thus, libpq will\nabsorb it before PQgetResult will return NULL). I think the\ntiming problem --- if that's what it is --- must be on the\nbackend side; somehow the backend is not processing the\ninbound notify queue before it goes idle.\n\nHmm ... just looking at the code again, could it be that there's\nno well-placed CHECK_FOR_INTERRUPTS? Andrew, could you see if\ninjecting one in what 790026972 added to postgres.c helps?\nThat is,\n\n /*\n * Also process incoming notifies, if any. This is mostly to\n * ensure stable behavior in tests: if any notifies were\n * received during the just-finished transaction, they'll be\n * seen by the client before ReadyForQuery is.\n */\n+ CHECK_FOR_INTERRUPTS();\n if (notifyInterruptPending)\n ProcessNotifyInterrupt();\n\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 03 Dec 2019 11:40:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Windows buildfarm members vs. new async-notify isolation test"
},
{
"msg_contents": "On Tue, Dec 3, 2019 at 10:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> In principle, the issue should not be there, because commits\n> 790026972 et al should have ensured that the NOTIFY protocol\n> message comes out before ReadyForQuery (and thus, libpq will\n> absorb it before PQgetResult will return NULL). I think the\n> timing problem --- if that's what it is --- must be on the\n> backend side; somehow the backend is not processing the\n> inbound notify queue before it goes idle.\n>\n> Hmm ... just looking at the code again, could it be that there's\n> no well-placed CHECK_FOR_INTERRUPTS? Andrew, could you see if\n> injecting one in what 790026972 added to postgres.c helps?\n> That is,\n>\n> /*\n> * Also process incoming notifies, if any. This is mostly to\n> * ensure stable behavior in tests: if any notifies were\n> * received during the just-finished transaction, they'll be\n> * seen by the client before ReadyForQuery is.\n> */\n> + CHECK_FOR_INTERRUPTS();\n> if (notifyInterruptPending)\n> ProcessNotifyInterrupt();\n>\n\nI also tried to analyze this failure and it seems this is a good bet,\nbut I am also wondering why we have never seen such a timing issue in\nother somewhat similar tests. For ex., one with comment (#\nCross-backend notification delivery.). Do they have a better way of\nensuring that the notification will be received or is it purely\ncoincidental that they haven't seen such a symptom?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 4 Dec 2019 10:31:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows buildfarm members vs. new async-notify isolation test"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Tue, Dec 3, 2019 at 10:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hmm ... just looking at the code again, could it be that there's\n>> no well-placed CHECK_FOR_INTERRUPTS? Andrew, could you see if\n>> injecting one in what 790026972 added to postgres.c helps?\n\n> I also tried to analyze this failure and it seems this is a good bet,\n> but I am also wondering why we have never seen such a timing issue in\n> other somewhat similar tests. For ex., one with comment (#\n> Cross-backend notification delivery.). Do they have a better way of\n> ensuring that the notification will be received or is it purely\n> coincidental that they haven't seen such a symptom?\n\nTBH, my bet is that this *won't* fix it, but it seemed like an easy\nthing to test. For this to fix it, you'd have to suppose that we\nnever do a CHECK_FOR_INTERRUPTS during a COMMIT command, which is\nimprobable at best.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 Dec 2019 00:12:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Windows buildfarm members vs. new async-notify isolation test"
},
{
"msg_contents": "On Wed, Dec 4, 2019 at 12:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Tue, Dec 3, 2019 at 10:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Hmm ... just looking at the code again, could it be that there's\n> >> no well-placed CHECK_FOR_INTERRUPTS? Andrew, could you see if\n> >> injecting one in what 790026972 added to postgres.c helps?\n>\n> > I also tried to analyze this failure and it seems this is a good bet,\n> > but I am also wondering why we have never seen such a timing issue in\n> > other somewhat similar tests. For ex., one with comment (#\n> > Cross-backend notification delivery.). Do they have a better way of\n> > ensuring that the notification will be received or is it purely\n> > coincidental that they haven't seen such a symptom?\n>\n> TBH, my bet is that this *won't* fix it, but it seemed like an easy\n> thing to test. For this to fix it, you'd have to suppose that we\n> never do a CHECK_FOR_INTERRUPTS during a COMMIT command, which is\n> improbable at best.\n>\n\n\nYou win your bet. Tried this on frogmouth and it still failed.\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 4 Dec 2019 11:21:44 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows buildfarm members vs. new async-notify isolation test"
},
{
"msg_contents": "On Wed, Dec 4, 2019 at 9:51 PM Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n>\n> On Wed, Dec 4, 2019 at 12:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Amit Kapila <amit.kapila16@gmail.com> writes:\n> > > On Tue, Dec 3, 2019 at 10:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >> Hmm ... just looking at the code again, could it be that there's\n> > >> no well-placed CHECK_FOR_INTERRUPTS? Andrew, could you see if\n> > >> injecting one in what 790026972 added to postgres.c helps?\n> >\n> > > I also tried to analyze this failure and it seems this is a good bet,\n> > > but I am also wondering why we have never seen such a timing issue in\n> > > other somewhat similar tests. For ex., one with comment (#\n> > > Cross-backend notification delivery.). Do they have a better way of\n> > > ensuring that the notification will be received or is it purely\n> > > coincidental that they haven't seen such a symptom?\n> >\n> > TBH, my bet is that this *won't* fix it, but it seemed like an easy\n> > thing to test. For this to fix it, you'd have to suppose that we\n> > never do a CHECK_FOR_INTERRUPTS during a COMMIT command, which is\n> > improbable at best.\n> >\n>\n>\n> You win your bet. Tried this on frogmouth and it still failed.\n>\n\nIIUC, this means that commit (step l2commit) is finishing before the\nnotify signal is reached that session. If so, can we at least confirm\nthat by adding something like select pg_sleep(1) in that step? So,\nl2commit will be: step \"l2commit\" { SELECT pg_sleep(1); COMMIT; }. I\nthink we can try by increasing sleep time as well to confirm the\nbehavior if required.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 5 Dec 2019 15:07:08 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows buildfarm members vs. new async-notify isolation test"
},
{
"msg_contents": "\nOn 12/5/19 4:37 AM, Amit Kapila wrote:\n> On Wed, Dec 4, 2019 at 9:51 PM Andrew Dunstan\n> <andrew.dunstan@2ndquadrant.com> wrote:\n>> On Wed, Dec 4, 2019 at 12:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Amit Kapila <amit.kapila16@gmail.com> writes:\n>>>> On Tue, Dec 3, 2019 at 10:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>>> Hmm ... just looking at the code again, could it be that there's\n>>>>> no well-placed CHECK_FOR_INTERRUPTS? Andrew, could you see if\n>>>>> injecting one in what 790026972 added to postgres.c helps?\n>>>> I also tried to analyze this failure and it seems this is a good bet,\n>>>> but I am also wondering why we have never seen such a timing issue in\n>>>> other somewhat similar tests. For ex., one with comment (#\n>>>> Cross-backend notification delivery.). Do they have a better way of\n>>>> ensuring that the notification will be received or is it purely\n>>>> coincidental that they haven't seen such a symptom?\n>>> TBH, my bet is that this *won't* fix it, but it seemed like an easy\n>>> thing to test. For this to fix it, you'd have to suppose that we\n>>> never do a CHECK_FOR_INTERRUPTS during a COMMIT command, which is\n>>> improbable at best.\n>>>\n>>\n>> You win your bet. Tried this on frogmouth and it still failed.\n>>\n> IIUC, this means that commit (step l2commit) is finishing before the\n> notify signal is reached that session. If so, can we at least confirm\n> that by adding something like select pg_sleep(1) in that step? So,\n> l2commit will be: step \"l2commit\" { SELECT pg_sleep(1); COMMIT; }. I\n> think we can try by increasing sleep time as well to confirm the\n> behavior if required.\n>\n\nYeah, with the sleep in there the NOTIFY is seen.\n\n\ncheers\n\n\nandrew\n\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Fri, 6 Dec 2019 17:11:10 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows buildfarm members vs. new async-notify isolation test"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 12/5/19 4:37 AM, Amit Kapila wrote:\n>> IIUC, this means that commit (step l2commit) is finishing before the\n>> notify signal is reached that session. If so, can we at least confirm\n>> that by adding something like select pg_sleep(1) in that step? So,\n>> l2commit will be: step \"l2commit\" { SELECT pg_sleep(1); COMMIT; }. I\n>> think we can try by increasing sleep time as well to confirm the\n>> behavior if required.\n\n> Yeah, with the sleep in there the NOTIFY is seen.\n\nWell, that is *really* interesting, because I was fairly sure that\neverything was adequately interlocked. The signal must have been\nsent before step notify1 finishes, and then we do several other\nthings, so how could the listener2 process not have gotten it by\nthe time we run the l2commit step? I still think this is showing\nus some sort of deficiency in our Windows signal mechanism.\n\nA possible theory as to what's happening is that the kernel scheduler\nis discriminating against listener2's signal management thread(s)\nand not running them until everything else goes idle for a moment.\n(If true, even a very short sleep ought to be enough to fix the test.)\nIf that's what's happening, though, I think we ought to look into\nwhether we can raise the priority of the signal threads compared to\nthe main thread. I don't think we want this much variation between\nthe way signals work on Windows and the way they work elsewhere.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 Dec 2019 18:31:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Windows buildfarm members vs. new async-notify isolation test"
},
{
"msg_contents": "On Sat, Dec 7, 2019 at 5:01 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> > On 12/5/19 4:37 AM, Amit Kapila wrote:\n> >> IIUC, this means that commit (step l2commit) is finishing before the\n> >> notify signal is reached that session. If so, can we at least confirm\n> >> that by adding something like select pg_sleep(1) in that step? So,\n> >> l2commit will be: step \"l2commit\" { SELECT pg_sleep(1); COMMIT; }. I\n> >> think we can try by increasing sleep time as well to confirm the\n> >> behavior if required.\n>\n> > Yeah, with the sleep in there the NOTIFY is seen.\n>\n> Well, that is *really* interesting, because I was fairly sure that\n> everything was adequately interlocked. The signal must have been\n> sent before step notify1 finishes, and then we do several other\n> things, so how could the listener2 process not have gotten it by\n> the time we run the l2commit step? I still think this is showing\n> us some sort of deficiency in our Windows signal mechanism.\n>\n> A possible theory as to what's happening is that the kernel scheduler\n> is discriminating against listener2's signal management thread(s)\n> and not running them until everything else goes idle for a moment.\n>\n\nIf we have to believe that theory then why the other similar test is\nnot showing the problem. The other test, I am talking about is below:\n# Cross-backend notification delivery.\npermutation \"llisten\" \"notify1\" \"notify2\" \"notify3\" \"notifyf\" \"lcheck\"\n\nIn this test also, one session is listening and the other session sent\na few notifications and then when the first session executes another\ncommand (lcheck), it receives the notifications. 
I have also debugged\nit in the Windows box that as soon as the notify sends the signal, the\nsignal thread receives it and comes out of ConnectNamedPipe and does\nthe processing to dispatch the signal.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 7 Dec 2019 09:38:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows buildfarm members vs. new async-notify isolation test"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Sat, Dec 7, 2019 at 5:01 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> A possible theory as to what's happening is that the kernel scheduler\n>> is discriminating against listener2's signal management thread(s)\n>> and not running them until everything else goes idle for a moment.\n\n> If we have to believe that theory then why the other similar test is\n> not showing the problem.\n\nThere are fewer processes involved in that case, so I don't think\nit disproves the theory that this is a scheduler glitch.\n\n> I have also debugged\n> it in the Windows box that as soon as the notify sends the signal, the\n> signal thread receives it and comes out of ConnectNamedPipe and does\n> the processing to dispatch the signal.\n\nHave you done that debugging on a machine that's showing the failure?\nSince only some of our Windows critters are showing it, it's evidently\ndependent on environment or Windows version somehow.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 07 Dec 2019 12:20:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Windows buildfarm members vs. new async-notify isolation test"
},
{
"msg_contents": "I wrote:\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n>> On Sat, Dec 7, 2019 at 5:01 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> A possible theory as to what's happening is that the kernel scheduler\n>>> is discriminating against listener2's signal management thread(s)\n>>> and not running them until everything else goes idle for a moment.\n\n>> If we have to believe that theory then why the other similar test is\n>> not showing the problem.\n\n> There are fewer processes involved in that case, so I don't think\n> it disproves the theory that this is a scheduler glitch.\n\nSo, just idly looking at the code in src/backend/port/win32/signal.c\nand src/port/kill.c, I have to wonder why we have this baroque-looking\ndesign of using *two* signal management threads. And, if I'm\nreading it right, we create an entire new pipe object and an entire\nnew instance of the second thread for each incoming signal. Plus, the\nsignal senders use CallNamedPipe (hence, underneath, TransactNamedPipe)\nwhich means they in effect wait for the recipient's signal-handling\nthread to ack receipt of the signal. Maybe there's a good reason for\nall this but it sure seems like a lot of wasted cycles from here.\n\nI have to wonder why we don't have a single named pipe that lasts as\nlong as the recipient process does, and a signal sender just writes\none byte to it, and considers the signal delivered if it is able to\ndo that. The \"message\" semantics seem like overkill for that.\n\nI dug around in the contemporaneous archives and could only find\nhttps://www.postgresql.org/message-id/303E00EBDD07B943924382E153890E5434AA47%40cuthbert.rcsinc.local\nwhich describes the existing approach but fails to explain why we\nshould do it like that.\n\nThis might or might not have much to do with the immediate problem,\nbut I can't help wondering if there's some race-condition-ish behavior\nin there that's contributing to what we're seeing. 
We already had to\nfix a couple of race conditions from doing it like this, cf commits\n2e371183e, 04a4413c2, f27a4696f. Perhaps 0ea1f2a3a is relevant\nas well.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 07 Dec 2019 14:56:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Windows buildfarm members vs. new async-notify isolation test"
},
{
"msg_contents": "On Sun, Dec 8, 2019 at 1:26 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> So, just idly looking at the code in src/backend/port/win32/signal.c\n> and src/port/kill.c, I have to wonder why we have this baroque-looking\n> design of using *two* signal management threads. And, if I'm\n> reading it right, we create an entire new pipe object and an entire\n> new instance of the second thread for each incoming signal. Plus, the\n> signal senders use CallNamedPipe (hence, underneath, TransactNamedPipe)\n> which means they in effect wait for the recipient's signal-handling\n> thread to ack receipt of the signal. Maybe there's a good reason for\n> all this but it sure seems like a lot of wasted cycles from here.\n>\n> I have to wonder why we don't have a single named pipe that lasts as\n> long as the recipient process does, and a signal sender just writes\n> one byte to it, and considers the signal delivered if it is able to\n> do that. The \"message\" semantics seem like overkill for that.\n>\n> I dug around in the contemporaneous archives and could only find\n> https://www.postgresql.org/message-id/303E00EBDD07B943924382E153890E5434AA47%40cuthbert.rcsinc.local\n> which describes the existing approach but fails to explain why we\n> should do it like that.\n>\n> This might or might not have much to do with the immediate problem,\n> but I can't help wondering if there's some race-condition-ish behavior\n> in there that's contributing to what we're seeing.\n>\n\nOn the receiving side, the work we do after the 'notify' is finished\n(or before CallNamedPipe gets control back) is as follows:\n\npg_signal_dispatch_thread()\n{\n..\nFlushFileBuffers(pipe);\nDisconnectNamedPipe(pipe);\nCloseHandle(pipe);\n\npg_queue_signal(sigNum);\n}\n\nIt seems most of these are the system calls which makes me think that\nthey might be slow enough on some Windows version that it could lead\nto such race condition.\n\nNow, coming back to the other theory the scheduler is not able 
to\nschedule these signal management threads. I think if that would be\nthe case, then notify could not have finished, because CallNamedPipe\nreturns only when dispatch thread writes back to the pipe. Now, if\nsomehow after writing back on the pipe if the scheduler kicks this\nthread out, it is possible that we see such behavior, however, I am\nnot sure if we can do anything about that.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 8 Dec 2019 10:44:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows buildfarm members vs. new async-notify isolation test"
},
{
"msg_contents": "On Sat, Dec 7, 2019 at 10:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Sat, Dec 7, 2019 at 5:01 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> A possible theory as to what's happening is that the kernel scheduler\n> >> is discriminating against listener2's signal management thread(s)\n> >> and not running them until everything else goes idle for a moment.\n>\n> > If we have to believe that theory then why the other similar test is\n> > not showing the problem.\n>\n> There are fewer processes involved in that case, so I don't think\n> it disproves the theory that this is a scheduler glitch.\n>\n> > I have also debugged\n> > it in the Windows box that as soon as the notify sends the signal, the\n> > signal thread receives it and comes out of ConnectNamedPipe and does\n> > the processing to dispatch the signal.\n>\n> Have you done that debugging on a machine that's showing the failure?\n>\n\nNo, it is on my local Win-7 setup.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 8 Dec 2019 10:45:36 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows buildfarm members vs. new async-notify isolation test"
},
{
"msg_contents": "On Sun, Dec 8, 2019 at 10:44 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Dec 8, 2019 at 1:26 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > So, just idly looking at the code in src/backend/port/win32/signal.c\n> > and src/port/kill.c, I have to wonder why we have this baroque-looking\n> > design of using *two* signal management threads. And, if I'm\n> > reading it right, we create an entire new pipe object and an entire\n> > new instance of the second thread for each incoming signal. Plus, the\n> > signal senders use CallNamedPipe (hence, underneath, TransactNamedPipe)\n> > which means they in effect wait for the recipient's signal-handling\n> > thread to ack receipt of the signal. Maybe there's a good reason for\n> > all this but it sure seems like a lot of wasted cycles from here.\n> >\n> > I have to wonder why we don't have a single named pipe that lasts as\n> > long as the recipient process does, and a signal sender just writes\n> > one byte to it, and considers the signal delivered if it is able to\n> > do that. 
The \"message\" semantics seem like overkill for that.\n> >\n> > I dug around in the contemporaneous archives and could only find\n> > https://www.postgresql.org/message-id/303E00EBDD07B943924382E153890E5434AA47%40cuthbert.rcsinc.local\n> > which describes the existing approach but fails to explain why we\n> > should do it like that.\n> >\n> > This might or might not have much to do with the immediate problem,\n> > but I can't help wondering if there's some race-condition-ish behavior\n> > in there that's contributing to what we're seeing.\n> >\n>\n> On the receiving side, the work we do after the 'notify' is finished\n> (or before CallNamedPipe gets control back) is as follows:\n>\n> pg_signal_dispatch_thread()\n> {\n> ..\n> FlushFileBuffers(pipe);\n> DisconnectNamedPipe(pipe);\n> CloseHandle(pipe);\n>\n> pg_queue_signal(sigNum);\n> }\n>\n> It seems most of these are the system calls which makes me think that\n> they might be slow enough on some Windows version that it could lead\n> to such race condition.\n>\n\nIIUC, once the dispatch thread has queued the signal\n(pg_queue_signal), the next CHECK_FOR_INTERRUPTS by the main thread\nwill execute the signal. So, if we move pg_queue_signal() before we\ndo WriteFile in pg_signal_dispatch_thread(), this race condition will\nbe closed. Now, we might not want to do this as that will add some\nmore time (even though very less) before notify on the other side can\nfinish or maybe there is some technical problem with this idea which I\nam not able to see immediately.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 8 Dec 2019 16:27:44 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows buildfarm members vs. new async-notify isolation test"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> IIUC, once the dispatch thread has queued the signal\n> (pg_queue_signal), the next CHECK_FOR_INTERRUPTS by the main thread\n> will execute the signal. So, if we move pg_queue_signal() before we\n> do WriteFile in pg_signal_dispatch_thread(), this race condition will\n> be closed. Now, we might not want to do this as that will add some\n> more time (even though very less) before notify on the other side can\n> finish or maybe there is some technical problem with this idea which I\n> am not able to see immediately.\n\nHmm. Certainly worth trying to see if it resolves the failure on\nAndrew's machines.\n\nIt's not real hard to believe that TransactNamedPipe could be\n\"optimized\" so that it preferentially schedules the client thread\nonce the handshake is done, not the server thread (based on some\nheuristic claim that the former is probably an interactive process\nand the latter less so). In that situation, we'd proceed on with\nthe signal not really delivered, and there is nothing guaranteeing\nthat it will be delivered anytime soon --- the rest of the test\ncan make progress regardless of whether that thread ever gets\nscheduled again. So, as long as we've got this handshake mechanism,\nit seems like it'd be a good thing for the ack to indicate that\nthe signal was *actually* delivered (by setting the recipient's\nflag bit) and not just that it'll probably get delivered eventually.\n\nI remain a bit unsure that we actually need the handshaking business\nat all --- I doubt that Unix signals provide any guarantee of synchronous\ndelivery on most platforms. (If I'm reading the POSIX spec correctly,\nit only requires synchronous delivery when a thread signals itself.)\nBut the existence of this unsynchronized thread in the Windows\nimplementation sure seems like a dubious thing, now that you\npoint it out.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 08 Dec 2019 09:53:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Windows buildfarm members vs. new async-notify isolation test"
},
{
"msg_contents": "I wrote:\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n>> IIUC, once the dispatch thread has queued the signal\n>> (pg_queue_signal), the next CHECK_FOR_INTERRUPTS by the main thread\n>> will execute the signal. So, if we move pg_queue_signal() before we\n>> do WriteFile in pg_signal_dispatch_thread(), this race condition will\n>> be closed. Now, we might not want to do this as that will add some\n>> more time (even though very less) before notify on the other side can\n>> finish or maybe there is some technical problem with this idea which I\n>> am not able to see immediately.\n\n> Hmm. Certainly worth trying to see if it resolves the failure on\n> Andrew's machines.\n\nFor Andrew's convenience, here's a draft patch for that. I took the\nliberty of improving the rather thin comments in this area, too.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 08 Dec 2019 11:22:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Windows buildfarm members vs. new async-notify isolation test"
},
{
"msg_contents": "I wrote:\n> So, just idly looking at the code in src/backend/port/win32/signal.c\n> and src/port/kill.c, I have to wonder why we have this baroque-looking\n> design of using *two* signal management threads. And, if I'm\n> reading it right, we create an entire new pipe object and an entire\n> new instance of the second thread for each incoming signal. Plus, the\n> signal senders use CallNamedPipe (hence, underneath, TransactNamedPipe)\n> which means they in effect wait for the recipient's signal-handling\n> thread to ack receipt of the signal. Maybe there's a good reason for\n> all this but it sure seems like a lot of wasted cycles from here.\n\nHere's a possible patch (untested by me) to get rid of the second thread\nand the new-pipe-for-every-request behavior. I believe that the existing\nlogic may be based on Microsoft's \"Multithreaded Pipe Server\" example [1]\nor something similar, but that's based on an assumption that servicing\na client request may take a substantial amount of time and it's worth\nhandling requests concurrently. Neither point applies in this context.\n\nDoing it like this seems attractive to me because it gets rid of two\ndifferent failure modes: inability to create a new thread and inability\nto create a new pipe handle. Now on the other hand, it means that\ninability to complete the read/write transaction with a client right\naway will delay processing of other signals. But we know that the\nclient is engaged in a CallNamedPipe operation, so how realistic is\nthat concern?\n\nThis is to be applied on top of the other patch I just sent.\n\n\t\t\tregards, tom lane\n\n[1] https://docs.microsoft.com/en-us/windows/win32/ipc/multithreaded-pipe-server",
"msg_date": "Sun, 08 Dec 2019 11:57:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Windows buildfarm members vs. new async-notify isolation test"
},
{
"msg_contents": "\nOn 12/8/19 11:57 AM, Tom Lane wrote:\n> I wrote:\n>> So, just idly looking at the code in src/backend/port/win32/signal.c\n>> and src/port/kill.c, I have to wonder why we have this baroque-looking\n>> design of using *two* signal management threads. And, if I'm\n>> reading it right, we create an entire new pipe object and an entire\n>> new instance of the second thread for each incoming signal. Plus, the\n>> signal senders use CallNamedPipe (hence, underneath, TransactNamedPipe)\n>> which means they in effect wait for the recipient's signal-handling\n>> thread to ack receipt of the signal. Maybe there's a good reason for\n>> all this but it sure seems like a lot of wasted cycles from here.\n> Here's a possible patch (untested by me) to get rid of the second thread\n> and the new-pipe-for-every-request behavior. I believe that the existing\n> logic may be based on Microsoft's \"Multithreaded Pipe Server\" example [1]\n> or something similar, but that's based on an assumption that servicing\n> a client request may take a substantial amount of time and it's worth\n> handling requests concurrently. Neither point applies in this context.\n>\n> Doing it like this seems attractive to me because it gets rid of two\n> different failure modes: inability to create a new thread and inability\n> to create a new pipe handle. Now on the other hand, it means that\n> inability to complete the read/write transaction with a client right\n> away will delay processing of other signals. 
But we know that the\n> client is engaged in a CallNamedPipe operation, so how realistic is\n> that concern?\n>\n> This is to be applied on top of the other patch I just sent.\n>\n> \t\t\tregards, tom lane\n>\n> [1] https://docs.microsoft.com/en-us/windows/win32/ipc/multithreaded-pipe-server\n\n\n\nPatch 1 fixed the problems on frogmouth.\n\n\nPatch 2 also ran without incident.\n\n\ntests run on REL_10_STABLE\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Mon, 9 Dec 2019 13:24:53 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows buildfarm members vs. new async-notify isolation test"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> Patch 1 fixed the problems on frogmouth.\n\nCool, thanks. I'll push that in a bit (to the back branches as well as\nHEAD).\n\n> Patch 2 also ran without incident.\n\nWhat do people think about the second patch? I'd only propose that\nfor HEAD, since it's not really a bug fix.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Dec 2019 13:31:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Windows buildfarm members vs. new async-notify isolation test"
},
{
"msg_contents": "On Sun, Dec 8, 2019 at 10:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > So, just idly looking at the code in src/backend/port/win32/signal.c\n> > and src/port/kill.c, I have to wonder why we have this baroque-looking\n> > design of using *two* signal management threads. And, if I'm\n> > reading it right, we create an entire new pipe object and an entire\n> > new instance of the second thread for each incoming signal. Plus, the\n> > signal senders use CallNamedPipe (hence, underneath, TransactNamedPipe)\n> > which means they in effect wait for the recipient's signal-handling\n> > thread to ack receipt of the signal. Maybe there's a good reason for\n> > all this but it sure seems like a lot of wasted cycles from here.\n>\n> Here's a possible patch (untested by me) to get rid of the second thread\n> and the new-pipe-for-every-request behavior. I believe that the existing\n> logic may be based on Microsoft's \"Multithreaded Pipe Server\" example [1]\n> or something similar, but that's based on an assumption that servicing\n> a client request may take a substantial amount of time and it's worth\n> handling requests concurrently. Neither point applies in this context.\n>\n> Doing it like this seems attractive to me because it gets rid of two\n> different failure modes: inability to create a new thread and inability\n> to create a new pipe handle. Now on the other hand, it means that\n> inability to complete the read/write transaction with a client right\n> away will delay processing of other signals. But we know that the\n> client is engaged in a CallNamedPipe operation, so how realistic is\n> that concern?\n>\n\nRight, the client is engaged in a CallNamedPipe operation, but the\ncurrent mechanism can allow multiple such clients and that might lead\nto faster processing of signals. 
I am not sure how much practical\nadvantage we have with the current implementation over proposed\nchange, so not sure if we should just get rid of it on that grounds.\nIdeally, we can run a couple of tests to see if there is any help in\nservicing the signals with this mechanism over proposed change on\ndifferent Windows machines, but is it really worth the effort?\n\nYour patch looks good to me and I don't see much problem if you want\nto proceed with it, but I am just not sure if the current mechanism is\ncompletely bogus.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 10 Dec 2019 12:23:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows buildfarm members vs. new async-notify isolation test"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Sun, Dec 8, 2019 at 10:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Doing it like this seems attractive to me because it gets rid of two\n>> different failure modes: inability to create a new thread and inability\n>> to create a new pipe handle. Now on the other hand, it means that\n>> inability to complete the read/write transaction with a client right\n>> away will delay processing of other signals. But we know that the\n>> client is engaged in a CallNamedPipe operation, so how realistic is\n>> that concern?\n\n> Right, the client is engaged in a CallNamedPipe operation, but the\n> current mechanism can allow multiple such clients and that might lead\n> to faster processing of signals.\n\nIt would only matter if multiple processes signal the same backend at the\nsame time, which seems to me to be probably a very minority use-case.\nFor the normal case of one signal arriving at a time, what I'm suggesting\nought to be noticeably faster because of fewer kernel calls. Surely\ncreating a new pipe instance and a new thread are not free.\n\nIn any case, the main thing I'm on about here is getting rid of the\nfailure modes. The existing code does have a rather lame/buggy\nworkaround for the cant-create-new-pipe case. A possible answer for\ncant-create-new-thread might be to go ahead and service the current\nrequest locally in the long-lived signal thread. But that seems like\nit's piling useless (and hard to test) complexity on top of useless\ncomplexity.\n\n> Ideally, we can run a couple of tests to see if there is any help in\n> servicing the signals with this mechanism over proposed change on\n> different Windows machines, but is it really worth the effort?\n\nThe failure modes I'm worried about are obviously pretty low-probability;\nif they were not, we'd be getting field reports about it. So I'm not\nsure how you can test your way to a conclusion about whether this is an\nimprovement. 
But we're not in the business of ignoring failure modes\njust because they're low-probability. I'd argue that a kernel call\nthat's not there is a kernel call that cannot fail, and therefore ipso\nfacto an improvement.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Dec 2019 10:57:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Windows buildfarm members vs. new async-notify isolation test"
},
{
"msg_contents": "On Tue, Dec 10, 2019 at 9:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Sun, Dec 8, 2019 at 10:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Doing it like this seems attractive to me because it gets rid of two\n> >> different failure modes: inability to create a new thread and inability\n> >> to create a new pipe handle. Now on the other hand, it means that\n> >> inability to complete the read/write transaction with a client right\n> >> away will delay processing of other signals. But we know that the\n> >> client is engaged in a CallNamedPipe operation, so how realistic is\n> >> that concern?\n>\n> > Right, the client is engaged in a CallNamedPipe operation, but the\n> > current mechanism can allow multiple such clients and that might lead\n> > to faster processing of signals.\n>\n> It would only matter if multiple processes signal the same backend at the\n> same time, which seems to me to be probably a very minority use-case.\n> For the normal case of one signal arriving at a time, what I'm suggesting\n> ought to be noticeably faster because of fewer kernel calls. Surely\n> creating a new pipe instance and a new thread are not free.\n>\n> In any case, the main thing I'm on about here is getting rid of the\n> failure modes. The existing code does have a rather lame/buggy\n> workaround for the cant-create-new-pipe case. A possible answer for\n> cant-create-new-thread might be to go ahead and service the current\n> request locally in the long-lived signal thread. But that seems like\n> it's piling useless (and hard to test) complexity on top of useless\n> complexity.\n>\n\nI am convinced by your points. So +1 for your proposed patch. I have\nalready reviewed it yesterday and it appears fine to me.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 11 Dec 2019 11:18:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows buildfarm members vs. new async-notify isolation test"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> I am convinced by your points. So +1 for your proposed patch. I have\n> already reviewed it yesterday and it appears fine to me.\n\nOK, pushed. Thanks for reviewing!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 11 Dec 2019 15:10:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Windows buildfarm members vs. new async-notify isolation test"
},
{
"msg_contents": "On Thu, Dec 12, 2019 at 9:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > I am convinced by your points. So +1 for your proposed patch. I have\n> > already reviewed it yesterday and it appears fine to me.\n>\n> OK, pushed. Thanks for reviewing!\n\nI made a thing to watch out for low probability BF failures and it\ntold me that a similar failure in async-notify might have reappeared\non brolga:\n\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=brolga&dt=2020-07-15%2008:30:11\n| REL_10_STABLE\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=brolga&dt=2020-05-21%2009:17:13\n| REL9_6_STABLE\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=brolga&dt=2020-04-22%2009:13:38\n| REL9_6_STABLE\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=brolga&dt=2020-04-05%2009:38:13\n| REL9_6_STABLE\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=brolga&dt=2020-04-03%2021:17:39\n| REL9_6_STABLE\n\n\n",
"msg_date": "Sun, 30 Aug 2020 14:52:57 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows buildfarm members vs. new async-notify isolation test"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I made a thing to watch out for low probability BF failures and it\n> told me that a similar failure in async-notify might have reappeared\n> on brolga:\n\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=brolga&dt=2020-07-15%2008:30:11\n> | REL_10_STABLE\n> [ etc ]\n\nHm, interesting. None of these examples show an actual *failure* to\nreceive a notification, unlike the example that began this thread.\nSo it seems unlikely that back-patching 16114f2ea would help. What\nwe are seeing here, instead, is delayed timing of notify receipt(s).\nI suspect that this is a variant of the issue described over here:\n\nhttps://www.postgresql.org/message-id/flat/2527507.1598237598%40sss.pgh.pa.us\n\nI didn't have a great idea about how to fix it reliably in\ninsert-conflict-specconflict, and I lack one here too :-(.\n\nIt's interesting though that your examples are all in v10 or older.\nCould we have done something that indirectly fixes the problem\nsince then? Or is that just chance?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 29 Aug 2020 23:21:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Windows buildfarm members vs. new async-notify isolation test"
}
] |
[
{
"msg_contents": "Hi.\n\nI need to check database's age in psql, so I want to append that.\n\n\nlike this,\n\n\n(12) [postgres@ioseph-centos7 ~]$ psql -E\npsql (12.0)\nType \"help\" for help.\n\n(12.0) postgres@postgres=# \\l\n********* QUERY **********\nSELECT d.datname as \"Name\",\n pg_catalog.pg_get_userbyid(d.datdba) as \"Owner\",\n pg_catalog.pg_encoding_to_char(d.encoding) as \"Encoding\",\n d.datcollate as \"Collate\",\n d.datctype as \"Ctype\",\n pg_catalog.array_to_string(d.datacl, E'\\n') AS \"Access privileges\"\nFROM pg_catalog.pg_database d\nORDER BY 1;\n**************************\n\n List of databases\n Name | Owner | Encoding | Collate | Ctype | Access \nprivileges\n-----------+----------+----------+---------+-------------+-----------------------\n postgres | postgres | UTF8 | C | ko_KR.UTF-8 |\n template0 | postgres | UTF8 | C | ko_KR.UTF-8 | \n=c/postgres +\n | | | | | postgres=CTc/postgres\n template1 | postgres | UTF8 | C | ko_KR.UTF-8 | \n=c/postgres +\n | | | | | postgres=CTc/postgres\n(3 rows)\n\n(12.0) postgres@postgres=# \\l+\n********* QUERY **********\nSELECT d.datname as \"Name\",\n pg_catalog.pg_get_userbyid(d.datdba) as \"Owner\",\n pg_catalog.pg_encoding_to_char(d.encoding) as \"Encoding\",\n d.datcollate as \"Collate\",\n d.datctype as \"Ctype\",\n pg_catalog.array_to_string(d.datacl, E'\\n') AS \"Access privileges\",\n CASE WHEN pg_catalog.has_database_privilege(d.datname, 'CONNECT')\n THEN \npg_catalog.pg_size_pretty(pg_catalog.pg_database_size(d.datname))\n ELSE 'No Access'\n END as \"Size\",\n age(d.datfrozenxid) as \"Age\",\n t.spcname as \"Tablespace\",\n pg_catalog.shobj_description(d.oid, 'pg_database') as \"Description\"\nFROM pg_catalog.pg_database d\n JOIN pg_catalog.pg_tablespace t on d.dattablespace = t.oid\nORDER BY 1;\n**************************\n\nList of databases\n Name | Owner | Encoding | Collate | Ctype | Access \nprivileges | Size | Age | Tablespace | 
Description\n-----------+----------+----------+---------+-------------+-----------------------+---------+-----+------------+--------------------------------------------\n postgres | postgres | UTF8 | C | ko_KR.UTF-8 \n| | 8201 kB | 26 | pg_default | default \nadministrative connection database\n template0 | postgres | UTF8 | C | ko_KR.UTF-8 | \n=c/postgres +| 8049 kB | 26 | pg_default | unmodifiable empty \ndatabase\n | | | | | postgres=CTc/postgres \n| | | |\n template1 | postgres | UTF8 | C | ko_KR.UTF-8 | \n=c/postgres +| 8049 kB | 26 | pg_default | default template \nfor new databases\n | | | | | postgres=CTc/postgres \n| | | |\n(3 rows)\n\n--- patch content\n\n--- bin/psql/describe.c.orig 2019-12-03 11:42:22.628743691 +0900\n+++ bin/psql/describe.c 2019-12-03 11:36:21.652722682 +0900\n@@ -890,6 +890,10 @@\n gettext_noop(\"Size\"));\n if (verbose && pset.sversion >= 80000)\n appendPQExpBuffer(&buf,\n+ \",\\n age(d.datfrozenxid) as \\\"%s\\\"\",\n+ gettext_noop(\"Age\"));\n+ if (verbose && pset.sversion >= 80000)\n+ appendPQExpBuffer(&buf,\n \",\\n t.spcname as \\\"%s\\\"\",\n gettext_noop(\"Tablespace\"));\n if (verbose && pset.sversion >= 80200)\n\n\n-----\n\nPlease review it.\n\n\nRegards ioseph\n\n\n\n\n",
"msg_date": "Tue, 3 Dec 2019 11:51:55 +0900",
"msg_from": "Ioseph Kim <pgsql-kr@postgresql.kr>",
"msg_from_op": true,
"msg_subject": "add age for databases list in psql"
}
] |
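Until a patch like this lands, the "Age" value the proposal adds to `\l+` can be obtained with a plain query. This is a sketch built from the same expression the patch itself uses (`age(d.datfrozenxid)` over `pg_database`); it requires a running server, so no output is shown:

```sql
-- Per-database transaction-ID age, as the proposed \l+ "Age" column would
-- show it. age(datfrozenxid) reports how far each database's frozen-XID
-- horizon lags behind the current transaction ID.
SELECT d.datname AS "Name",
       age(d.datfrozenxid) AS "Age"
FROM pg_catalog.pg_database d
ORDER BY 1;
```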
[
{
"msg_contents": "Hi Hackers,\n\nI hit an error when updating a view with conditional INSTEAD OF rules, the\nreproduce steps are list below:\n\nCREATE TABLE t1(a int, b int);\n\nCREATE TABLE t2(a int, b int);\n\nCREATE VIEW v1 AS SELECT * FROM t1 where b > 100;\n\nINSERT INTO v1 values(1, 110);\n\nSELECT * FROM t1;\n\n\nCREATE OR REPLACE rule r1 AS\n\nON UPDATE TO v1\n\nWHERE old.a > new.b\n\nDO INSTEAD (\n\nINSERT INTO t2 values(old.a, old.b);\n\n);\n\n\nUPDATE v1 SET b = 2 WHERE a = 1;\n\n*ERROR: no relation entry for relid 2*\n\n\nWith some hacks, It is because, for conditional INSTEAD OF rules\n conditional, the original UPDATE operation also need to perform on the\nview, however, we didn't rewrite the target view for any view with INSTEAD\nrules.\n\nThere should be only two cases that you can skip the rewrite of target view:\n1) the view has INSTEAD OF triggers on the operations, the operations will\nbe replaced by trigger-defined\n2) the view has INSTEAD OF rules and it is non conditional rules, the\noperations will be replaced by actions.\n\nIt should be a typo in commit a99c42f291421572aef2, there is a description\nin documents:\n \"There is a catch if you try to use conditional rules\n for complex view updates: there must be an unconditional\n INSTEAD rule for each action you wish to allow on the view.\"\n\nCommit a99c42f291421572aef2 explicitly change the description that the\nrestriction only applies to complex view, conditional INSTEAD rule should\nwork for a simple view.\n\nI attached a patch to fix it, please take a look,\n\nThanks,\nPengzhou",
"msg_date": "Tue, 3 Dec 2019 19:05:34 +0800",
"msg_from": "Pengzhou Tang <ptang@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Errors when update a view with conditional-INSTEAD rules"
},
{
"msg_contents": "On Tue, 3 Dec 2019 at 11:06, Pengzhou Tang <ptang@pivotal.io> wrote:\n>\n> Hi Hackers,\n>\n> I hit an error when updating a view with conditional INSTEAD OF rules, the reproduce steps are list below:\n>\n> CREATE TABLE t1(a int, b int);\n> CREATE TABLE t2(a int, b int);\n> CREATE VIEW v1 AS SELECT * FROM t1 where b > 100;\n>\n> INSERT INTO v1 values(1, 110);\n> SELECT * FROM t1;\n>\n> CREATE OR REPLACE rule r1 AS\n> ON UPDATE TO v1\n> WHERE old.a > new.b\n> DO INSTEAD (\n> INSERT INTO t2 values(old.a, old.b);\n> );\n>\n> UPDATE v1 SET b = 2 WHERE a = 1;\n>\n> ERROR: no relation entry for relid 2\n>\n\nI took a look at this and one thing that's clear is that it should not\nbe producing that error. Testing that case in 9.3, where updatable\nviews were first added, it produces the expected error:\n\nERROR: cannot update view \"v1\"\nHINT: To enable updating the view, provide an INSTEAD OF UPDATE\ntrigger or an unconditional ON UPDATE DO INSTEAD rule.\n\nThat is the intended behaviour -- see [1] and the discussion that\nfollowed. Basically the presence of INSTEAD triggers or INSTEAD rules\n(conditional or otherwise) disables auto-updates. If you have any\nconditional INSTEAD rules, you must also have an unconditional INSTEAD\nrule or INSTEAD OF trigger to make the view updatable.\n\nSo what's curious is why this test case now produces this rather\nuninformative error:\n\nERROR: no relation entry for relid 2\n\nwhich really shouldn't be happening.\n\nTracing it through, this seems to be a result of\ncab5dc5daf2f6f5da0ce79deb399633b4bb443b5 which added support for\nupdatable views with a mix of updatable and non-updatable columns.\nThat included a change to rewriteTargetListIU() to prevent it from\nadding dummy targetlist entries for unassigned-to attributes for\nauto-updatable views, in case they are no longer simple references to\nthe underlying relation. Instead, that is left to expand_targetlist(),\nas for a normal table. 
However, in this case (an UPDATE on a view with\na conditional rule), the target relation of the original query isn't\nrewritten (we leave it to the executor to report the error), and so\nexpand_targetlist() ends up adding a new targetlist entry that\nreferences the target relation, which is still the original view. But\nwhen the planner builds the simple_rel_array, it only adds entries for\nrelations referenced in the query's jointree, which only includes the\nbase table by this point, not the view. Thus the new targetlist entry\nadded by expand_targetlist() refers to a NULL slot in the\nsimple_rel_array, and it blows up.\n\nGiven that this is a query that's going to fail anyway, I'm inclined\nto think that the right thing to do is to throw the error sooner, in\nrewriteQuery(), rather than attempting to plan a query that cannot be\nexecuted.\n\nThoughts?\n\nRegards,\nDean\n\n\n[1] https://www.postgresql.org/message-id/25777.1352325888%40sss.pgh.pa.us\n\n\n",
"msg_date": "Sat, 4 Jan 2020 16:08:19 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Errors when update a view with conditional-INSTEAD rules"
},
{
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> That included a change to rewriteTargetListIU() to prevent it from\n> adding dummy targetlist entries for unassigned-to attributes for\n> auto-updatable views, in case they are no longer simple references to\n> the underlying relation. Instead, that is left to expand_targetlist(),\n> as for a normal table. However, in this case (an UPDATE on a view with\n> a conditional rule), the target relation of the original query isn't\n> rewritten (we leave it to the executor to report the error), and so\n> expand_targetlist() ends up adding a new targetlist entry that\n> references the target relation, which is still the original view.\n\nSo why did we leave it to the executor to throw an error? I have\na feeling it was either because the rewriter didn't have (easy?)\naccess to the info, or it seemed like it'd be duplicating code.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Jan 2020 12:13:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Errors when update a view with conditional-INSTEAD rules"
},
{
"msg_contents": "On Sat, 4 Jan 2020 at 17:13, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> > That included a change to rewriteTargetListIU() to prevent it from\n> > adding dummy targetlist entries for unassigned-to attributes for\n> > auto-updatable views, in case they are no longer simple references to\n> > the underlying relation. Instead, that is left to expand_targetlist(),\n> > as for a normal table. However, in this case (an UPDATE on a view with\n> > a conditional rule), the target relation of the original query isn't\n> > rewritten (we leave it to the executor to report the error), and so\n> > expand_targetlist() ends up adding a new targetlist entry that\n> > references the target relation, which is still the original view.\n>\n> So why did we leave it to the executor to throw an error? I have\n> a feeling it was either because the rewriter didn't have (easy?)\n> access to the info, or it seemed like it'd be duplicating code.\n>\n\nPerhaps it was more to do with history and not wanting to duplicate\ncode. Before we had auto-updatable views, it was always the executor\nthat threw this error. With the addition of auto-updatable views, we\nalso throw the error from rewriteTargetView() if there are no rules or\ntriggers. But there is a difference -- rewriteTargetView() has more\ndetailed information about why the view isn't auto-updatable, which it\nincludes in the error detail.\n\nI think that the required information is easily available in the\nrewriter though. 
Currently RewriteQuery() is doing this:\n\n if ( !instead // No unconditional INSTEAD rules\n && qual_product == NULL // No conditional INSTEAD rules either\n && relkind == VIEW\n && !view_has_instead_trigger() )\n {\n // Attempt auto-update, throwing an error if not possible\n rewriteTargetView(...)\n ...\n }\n\nSo if that were to become something like:\n\n if ( !instead // No unconditional INSTEAD rules\n && relkind == VIEW\n && !view_has_instead_trigger() )\n {\n if (qual_product != NULL)\n {\n // Conditional INSTEAD rules exist, but no unconditional INSTEAD rules\n // or INSTEAD OF triggers, so throw an error\n ...\n }\n\n // Attempt auto-update, throwing an error if not possible\n rewriteTargetView(...)\n ...\n }\n\nthen in theory I think the error condition in the executor should\nnever be triggered. That will lead to a few lines of duplicated code\nbecause the error-throwing code block includes a switch on command\ntype. However, it also gives us an opportunity to be a more specific\nin the new error, with detail for this specific case.\n\nRegards,\nDean\n\n\n",
"msg_date": "Sat, 4 Jan 2020 18:12:13 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Errors when update a view with conditional-INSTEAD rules"
},
{
"msg_contents": "On Sat, 4 Jan 2020 at 18:12, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Sat, 4 Jan 2020 at 17:13, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> > > That included a change to rewriteTargetListIU() to prevent it from\n> > > adding dummy targetlist entries for unassigned-to attributes for\n> > > auto-updatable views, in case they are no longer simple references to\n> > > the underlying relation. Instead, that is left to expand_targetlist(),\n> > > as for a normal table. However, in this case (an UPDATE on a view with\n> > > a conditional rule), the target relation of the original query isn't\n> > > rewritten (we leave it to the executor to report the error), and so\n> > > expand_targetlist() ends up adding a new targetlist entry that\n> > > references the target relation, which is still the original view.\n> >\n> > So why did we leave it to the executor to throw an error? I have\n> > a feeling it was either because the rewriter didn't have (easy?)\n> > access to the info, or it seemed like it'd be duplicating code.\n> >\n> I think that the required information is easily available in the\n> rewriter ...\n\nHere's a patch along those lines. Yes, it's a little more code\nduplication, but I think it's worth it for the more detailed error.\nThere was no previous regression test coverage of this case so I added\nsome (all other test output is unaltered).\n\nThe existing comment in the executor check clearly implied that it\nthought that error was unreachable there, and I think it now is, but\nit seems worth leaving it just in case.\n\nRegards,\nDean",
"msg_date": "Tue, 7 Jan 2020 11:00:30 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Errors when update a view with conditional-INSTEAD rules"
},
{
"msg_contents": "On Tue, 7 Jan 2020 at 11:00, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> Here's a patch along those lines. Yes, it's a little more code\n> duplication, but I think it's worth it for the more detailed error.\n> There was no previous regression test coverage of this case so I added\n> some (all other test output is unaltered).\n>\n\n[finally getting back to this]\n\nHearing no objections, I have pushed and back-patched this.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 14 Jan 2020 10:18:38 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Errors when update a view with conditional-INSTEAD rules"
},
{
"msg_contents": "Thanks a lot, Dean, to look into this and also sorry for the late reply.\n\nOn Sun, Jan 5, 2020 at 12:08 AM Dean Rasheed <dean.a.rasheed@gmail.com>\nwrote:\n\n> Tracing it through, this seems to be a result of\n> cab5dc5daf2f6f5da0ce79deb399633b4bb443b5 which added support for\n> updatable views with a mix of updatable and non-updatable columns.\n> That included a change to rewriteTargetListIU() to prevent it from\n> adding dummy targetlist entries for unassigned-to attributes for\n> auto-updatable views, in case they are no longer simple references to\n> the underlying relation. Instead, that is left to expand_targetlist(),\n> as for a normal table. However, in this case (an UPDATE on a view with\n> a conditional rule), the target relation of the original query isn't\n> rewritten (we leave it to the executor to report the error), and so\n> expand_targetlist() ends up adding a new targetlist entry that\n> references the target relation, which is still the original view. But\n> when the planner bulds the simple_rel_array, it only adds entries for\n> relations referenced in the query's jointree, which only includes the\n> base table by this point, not the view. 
Thus the new targetlist entry\n> added by expand_targetlist() refers to a NULL slot in the\n> simple_rel_array, and it blows up.\n>\n> That's a great analysis of this issue.\n\n\n> Given that this is a query that's going to fail anyway, I'm inclined\n> to think that the right thing to do is to throw the error sooner, in\n> rewriteQuery(), rather than attempting to plan a query that cannot be\n> executed.\n>\n\nI am wondering whether a simple auto-updatable view can have a conditional\nupdate instead rule.\nFor the test case I added, does bellow plan looks reasonable?\ngpadmin=# explain UPDATE v1 SET b = 2 WHERE a = 1;\n QUERY PLAN\n-------------------------------------------------------------------\n Insert on t2 (cost=0.00..49.55 rows=1 width=8)\n -> Seq Scan on t1 (cost=0.00..49.55 rows=1 width=8)\n Filter: ((b > 100) AND (a > 2) AND (a = 1))\n\n Update on t1 (cost=0.00..49.55 rows=3 width=14)\n -> Seq Scan on t1 (cost=0.00..49.55 rows=3 width=14)\n Filter: (((a > 2) IS NOT TRUE) AND (b > 100) AND (a = 1))\n(7 rows)\n\ngpadmin=# UPDATE v1 SET b = 2 WHERE a = 1;\nUPDATE 1\n\nThe document also says that:\n\"There is a catch if you try to use conditional rules for *complex view*\nupdates: there must be an unconditional\nINSTEAD rule for each action you wish to allow on the view\" which makes me\nthink a simple view can have a\nconditional INSTEAD rule. 
And the document is explicitly changed in commit\na99c42f291421572aef2:\n- There is a catch if you try to use conditional rules for view\n+ There is a catch if you try to use conditional rules for complex view\n\nDoes that mean we should support conditional rules for a simple view?\n\nRegards,\nPengzhou Tang\n\nThanks a lot, Dean, to look into this and also sorry for the late reply.On Sun, Jan 5, 2020 at 12:08 AM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\nTracing it through, this seems to be a result of\ncab5dc5daf2f6f5da0ce79deb399633b4bb443b5 which added support for\nupdatable views with a mix of updatable and non-updatable columns.\nThat included a change to rewriteTargetListIU() to prevent it from\nadding dummy targetlist entries for unassigned-to attributes for\nauto-updatable views, in case they are no longer simple references to\nthe underlying relation. Instead, that is left to expand_targetlist(),\nas for a normal table. However, in this case (an UPDATE on a view with\na conditional rule), the target relation of the original query isn't\nrewritten (we leave it to the executor to report the error), and so\nexpand_targetlist() ends up adding a new targetlist entry that\nreferences the target relation, which is still the original view. But\nwhen the planner bulds the simple_rel_array, it only adds entries for\nrelations referenced in the query's jointree, which only includes the\nbase table by this point, not the view. Thus the new targetlist entry\nadded by expand_targetlist() refers to a NULL slot in the\nsimple_rel_array, and it blows up.\nThat's a great analysis of this issue. \nGiven that this is a query that's going to fail anyway, I'm inclined\nto think that the right thing to do is to throw the error sooner, in\nrewriteQuery(), rather than attempting to plan a query that cannot be\nexecuted. 
I am wondering whether a simple auto-updatable view can have a conditional update instead rule.For the test case I added, does bellow plan looks reasonable?gpadmin=# explain UPDATE v1 SET b = 2 WHERE a = 1; QUERY PLAN------------------------------------------------------------------- Insert on t2 (cost=0.00..49.55 rows=1 width=8) -> Seq Scan on t1 (cost=0.00..49.55 rows=1 width=8) Filter: ((b > 100) AND (a > 2) AND (a = 1)) Update on t1 (cost=0.00..49.55 rows=3 width=14) -> Seq Scan on t1 (cost=0.00..49.55 rows=3 width=14) Filter: (((a > 2) IS NOT TRUE) AND (b > 100) AND (a = 1))(7 rows) gpadmin=# UPDATE v1 SET b = 2 WHERE a = 1;UPDATE 1The document also says that:\"There is a catch if you try to use conditional rules for complex view updates: there must be an unconditionalINSTEAD rule for each action you wish to allow on the view\" which makes me think a simple view can have aconditional INSTEAD rule. And the document is explicitly changed in commit a99c42f291421572aef2:- There is a catch if you try to use conditional rules for view+ There is a catch if you try to use conditional rules for complex viewDoes that mean we should support conditional rules for a simple view?Regards,Pengzhou Tang",
"msg_date": "Fri, 17 Jan 2020 14:13:59 +0800",
"msg_from": "Pengzhou Tang <ptang@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Errors when update a view with conditional-INSTEAD rules"
},
{
"msg_contents": "On Fri, 17 Jan 2020 at 06:14, Pengzhou Tang <ptang@pivotal.io> wrote:\n>\n> I am wondering whether a simple auto-updatable view can have a conditional update instead rule.\n\nWell, the decision reached in [1] was that we wouldn't allow that. We\ncould decide to allow it now as a new feature enhancement, but it\nwouldn't be a back-patchable bug-fix, and to be honest I wouldn't be\nparticularly excited about adding such a feature now. We already get\nenough reports related to multiple rule actions behaving in\ncounter-intuitive ways that trip up users. I don't think we should be\nenhancing the rule system, but rather encouraging users not to use it\nand use triggers instead.\n\n> The document also says that:\n> \"There is a catch if you try to use conditional rules for complex view updates: there must be an unconditional\n> INSTEAD rule for each action you wish to allow on the view\" which makes me think a simple view can have a\n> conditional INSTEAD rule. And the document is explicitly changed in commit a99c42f291421572aef2:\n> - There is a catch if you try to use conditional rules for view\n> + There is a catch if you try to use conditional rules for complex view\n>\n> Does that mean we should support conditional rules for a simple view?\n>\n\nNo. I don't recall why that wording was changed in that commit, but I\nthink it's meant to be read as \"complex updates on views\" -- i.e., the\nword \"complex\" refers to the complexity of the update logic, not the\ncomplexity of the view. Nothing in that paragraph is related to\ncomplex vs simple views, it's about complex sets of rules.\n\nRegards,\nDean\n\n[1] https://www.postgresql.org/message-id/25777.1352325888%40sss.pgh.pa.us\n\n\n",
"msg_date": "Fri, 17 Jan 2020 14:36:15 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Errors when update a view with conditional-INSTEAD rules"
},
{
"msg_contents": "> I am wondering whether a simple auto-updatable view can have a\n> conditional update instead rule.\n>\n> Well, the decision reached in [1] was that we wouldn't allow that. We\n> could decide to allow it now as a new feature enhancement, but it\n> wouldn't be a back-patchable bug-fix, and to be honest I wouldn't be\n> particularly excited about adding such a feature now. We already get\n> enough reports related to multiple rule actions behaving in\n> counter-intuitive ways that trip up users. I don't think we should be\n> enhancing the rule system, but rather encouraging users not to use it\n> and use triggers instead.\n>\n> Ok, that makes sense, thanks for the explanation.\n\n> I am wondering whether a simple auto-updatable view can have a conditional update instead rule.\n\nWell, the decision reached in [1] was that we wouldn't allow that. We\ncould decide to allow it now as a new feature enhancement, but it\nwouldn't be a back-patchable bug-fix, and to be honest I wouldn't be\nparticularly excited about adding such a feature now. We already get\nenough reports related to multiple rule actions behaving in\ncounter-intuitive ways that trip up users. I don't think we should be\nenhancing the rule system, but rather encouraging users not to use it\nand use triggers instead.\nOk, that makes sense, thanks for the explanation.",
"msg_date": "Wed, 22 Jan 2020 14:28:01 +0800",
"msg_from": "Pengzhou Tang <ptang@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Errors when update a view with conditional-INSTEAD rules"
}
] |
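Dean's advice above (prefer INSTEAD OF triggers over conditional rules) can be illustrated with a sketch that rewrites the thread's failing example. The function and trigger names are made up for illustration, and the ELSE branch assumes `(a, b)` is enough to identify the row being updated:

```sql
-- Sketch: replace the conditional DO INSTEAD rule on v1 with an
-- INSTEAD OF trigger, as recommended in the discussion above.
CREATE FUNCTION v1_update_trgfn() RETURNS trigger AS $$
BEGIN
    IF OLD.a > NEW.b THEN
        -- what the conditional rule's action did
        INSERT INTO t2 VALUES (OLD.a, OLD.b);
    ELSE
        -- otherwise perform the real update on the base table
        UPDATE t1 SET a = NEW.a, b = NEW.b
        WHERE a = OLD.a AND b = OLD.b;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER v1_update_trg
    INSTEAD OF UPDATE ON v1
    FOR EACH ROW EXECUTE FUNCTION v1_update_trgfn();
```

With the trigger in place, `UPDATE v1 SET b = 2 WHERE a = 1;` no longer involves the rule system at all, so the ambiguity that tripped the rewriter cannot arise.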
[
{
"msg_contents": "Hi!\n\nDuring work on knn-btree patchset we've faced the need to move\nfunctions/operators from contrib to core [1]. In the extension\nupgrade script we need to use @extschema@ in order to distinguish\ncontrib and core objects. However, it appears to be possible to use\n@extschema@ only in non-relocatable extensions. Comment in\nextension.c says: \"For a relocatable extension, we needn't do this.\nThere cannot be any need for @extschema@, else it wouldn't be\nrelocatable.\". I've explored that we've marked extension as\nnon-relocatable solely to use @extschema@ in script before [2].\n\nSo, it appears that comment in extension.c isn't true. There is at\nleast two precedents when relocatable extension needs to use\n@extschema@. We've marked possibly relocatable extension as\nnon-relocatable once. And we could do it at the second time.\nExtension relocatability doesn't seem to me much value to sacrifice.\nBut nevertheless should we allow relocatable extension to use\n@extschema@. Any thoughts?\n\nLinks.\n1. https://www.postgresql.org/message-id/CAPpHfdsWsb9T1eHdX%2Br7wnXbGJKQxSffc8gTGp4ZA2ewP49Hog%40mail.gmail.com\n2. https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=de623f33353c96657651f9c3a6c8756616c610e4;hp=0024e348989254d48dc4afe9beab98a6994a791e\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Tue, 3 Dec 2019 15:06:09 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Allow relocatable extension to use @extschema@?"
},
{
"msg_contents": "Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> But nevertheless should we allow relocatable extension to use\n> @extschema@. Any thoughts?\n\nNo. The reasoning in the comment still holds good: if you embed\n@extschema@ in an object's definition, it becomes nonrelocatable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 03 Dec 2019 10:18:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allow relocatable extension to use @extschema@?"
},
{
"msg_contents": "On Tue, Dec 3, 2019 at 6:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> > But nevertheless should we allow relocatable extension to use\n> > @extschema@. Any thoughts?\n>\n> No. The reasoning in the comment still holds good: if you embed\n> @extschema@ in an object's definition, it becomes nonrelocatable.\n\nI see, allowing @extschema@ in non-relocatable extension provides easy\nway to shoot yourself in the foot.\n\nHowever, it might be still useful to be able to distinguish extension\nand core object in upgrade script of relocatable extensions. What\nabout (optional) way to set search_path to @extschema@, pg_catalog\ninstead of just @extschema@?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Tue, 3 Dec 2019 23:29:19 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow relocatable extension to use @extschema@?"
},
{
"msg_contents": "On Tue, Dec 3, 2019 at 10:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> > But nevertheless should we allow relocatable extension to use\n> > @extschema@. Any thoughts?\n>\n> No. The reasoning in the comment still holds good: if you embed\n> @extschema@ in an object's definition, it becomes nonrelocatable.\n\nThat's demonstrably false, and the links included in the OP prove it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 5 Dec 2019 09:52:50 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow relocatable extension to use @extschema@?"
},
{
"msg_contents": "On Thu, Dec 05, 2019 at 09:52:50AM -0500, Robert Haas wrote:\n> On Tue, Dec 3, 2019 at 10:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> > > But nevertheless should we allow relocatable extension to use\n> > > @extschema@. Any thoughts?\n> >\n> > No. The reasoning in the comment still holds good: if you embed\n> > @extschema@ in an object's definition, it becomes nonrelocatable.\n> \n> That's demonstrably false, and the links included in the OP prove it.\n\nRight. Or, to add further precision, embedding @extschema@ persistently in\nthe system catalogs does make the extension nonrelocatable. The OP examples\nmerely embed @extschema@ in extension scripts, and they don't impair\nrelocation. I'm okay with the proposal, but I still wouldn't mind deprecating\nrelocatable=true.\n\n\n",
"msg_date": "Sat, 18 Jan 2020 10:28:21 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow relocatable extension to use @extschema@?"
}
] |
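Alexander's closing suggestion (pinning `search_path` rather than qualifying every reference) would look roughly like this in an upgrade script. Note this is a sketch of the *proposal*, not current behavior (today @extschema@ is substituted only for non-relocatable extensions), and `myext`/`myfunc` are hypothetical names used only for illustration:

```sql
-- Hypothetical myext--1.0--1.1.sql upgrade script.
-- With search_path pinned to the extension's schema plus pg_catalog,
-- unqualified names resolve predictably, so the script can tell the
-- extension's own object apart from a same-named object now in core.
SET LOCAL search_path TO @extschema@, pg_catalog;

-- Drop the extension's old implementation; callers then fall through
-- to the new in-core (pg_catalog) version.
ALTER EXTENSION myext DROP FUNCTION myfunc(int);
DROP FUNCTION @extschema@.myfunc(int);
```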
[
{
"msg_contents": "Hello,\n\nWhile going through this file I noticed some inconsistencies in the\ncomments. Please find attachment for the fix.\n\n-- \nRegards,\nRafia Sabih",
"msg_date": "Tue, 3 Dec 2019 14:36:20 +0100",
"msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>",
"msg_from_op": true,
"msg_subject": "Minor comment fixes for instrumentation.h"
},
{
"msg_contents": "On Tue, Dec 3, 2019 at 8:36 AM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> While going through this file I noticed some inconsistencies in the\n> comments. Please find attachment for the fix.\n\nCommitted. I think only the duplicated word is a clear error, but the\nother changes seem like mild improvements, so pushed the whole thing.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 5 Dec 2019 08:44:52 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Minor comment fixes for instrumentation.h"
},
{
"msg_contents": "On Thu, 5 Dec 2019 at 14:45, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Dec 3, 2019 at 8:36 AM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> > While going through this file I noticed some inconsistencies in the\n> > comments. Please find attachment for the fix.\n>\n> Committed. I think only the duplicated word is a clear error, but the\n> other changes seem like mild improvements, so pushed the whole thing.\n\nTrue and thank you.\n\n-- \nRegards,\nRafia Sabih\n\n\n",
"msg_date": "Thu, 5 Dec 2019 14:51:58 +0100",
"msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Minor comment fixes for instrumentation.h"
}
] |
[
{
"msg_contents": "This patch simply adds “.idea/“ to the list of global excludes across all subdirectories. This directory is created when a JetBrains IDE is used to open a project. In my specific case, Clion is creating the project directory.\n\nThe ONLY change in the patch is the “.idea/“ addition to .gitignore.\n\nDavid Nedrow\ndnedrow@me.com",
"msg_date": "Tue, 3 Dec 2019 09:47:44 -0500",
"msg_from": "David Nedrow <dnedrow@me.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Addition of JetBrains project directory to .gitignore"
},
{
"msg_contents": "> On 3 Dec 2019, at 15:47, David Nedrow <dnedrow@me.com> wrote:\n\n> This patch simply adds “.idea/“ to the list of global excludes across all subdirectories. This directory is created when a JetBrains IDE is used to open a project. In my specific case, Clion is creating the project directory.\n> \n> The ONLY change in the patch is the “.idea/“ addition to .gitignore.\n\n-1. This seems like something better suited in a local gitignore for those who\nuse Jetbrains products. See the documentation for ~/.gitignore_global.\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 3 Dec 2019 15:51:28 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Addition of JetBrains project directory to .gitignore"
},
{
"msg_contents": "> On 3 Dec 2019, at 15:56, David Nedrow <dnedrow@me.com> wrote: \n\n> Hmmm. I can see that. However, there are already entries for Microsoft Visual C++ at the global level. Wouldn’t this fall into the same category?\n\nNot really, the files in the current .gitignore are artifacts of the build-\nsystem which is provided by the postgres tree (MSVC building, gcov etc); there\nare no editor specific files ignored there.\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 3 Dec 2019 16:01:24 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Addition of JetBrains project directory to .gitignore"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 3 Dec 2019, at 15:47, David Nedrow <dnedrow@me.com> wrote:\n>> This patch simply adds “.idea/“ to the list of global excludes across all subdirectories. This directory is created when a JetBrains IDE is used to open a project. In my specific case, Clion is creating the project directory.\n>> \n>> The ONLY change in the patch is the “.idea/“ addition to .gitignore.\n\n> -1. This seems like something better suited in a local gitignore for those who\n> use Jetbrains products. See the documentation for ~/.gitignore_global.\n\nYeah, we already have a policy that we won't add entries for, say,\neditor backup files. This seems like the same thing. It's stuff\ngenerated by a tool you use, and you'd need it for any project\nyou work on, so a personal ~/.gitexclude seems like the answer.\n\n(Roughly speaking, I think the project policy is/should be that only\njunk files created by application of build rules in our Makefiles\nshould be excluded by our own .gitexclude files.)\n\nAs a point of reference, I have\n\n$ cat ~/.gitexclude \n*~\n*.orig\n\nto suppress emacs backup files and patch backup files respectively.\nSomebody who prefers another editor would have no use for *~.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 03 Dec 2019 10:07:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Addition of JetBrains project directory to .gitignore"
},
{
"msg_contents": "Got it, and that makes sense.\n\nI hereby withdraw this patch. ;)\n\n- David\n\n> On Dec 3, 2019, at 10:08, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>>>> On 3 Dec 2019, at 15:47, David Nedrow <dnedrow@me.com> wrote:\n>>> This patch simply adds “.idea/“ to the list of global excludes across all subdirectories. This directory is created when a JetBrains IDE is used to open a project. In my specific case, Clion is creating the project directory.\n>>> \n>>> The ONLY change in the patch is the “.idea/“ addition to .gitignore.\n> \n>> -1. This seems like something better suited in a local gitignore for those who\n>> use Jetbrains products. See the documentation for ~/.gitignore_global.\n> \n> Yeah, we already have a policy that we won't add entries for, say,\n> editor backup files. This seems like the same thing. It's stuff\n> generated by a tool you use, and you'd need it for any project\n> you work on, so a personal ~/.gitexclude seems like the answer.\n> \n> (Roughly speaking, I think the project policy is/should be that only\n> junk files created by application of build rules in our Makefiles\n> should be excluded by our own .gitexclude files.)\n> \n> As a point of reference, I have\n> \n> $ cat ~/.gitexclude \n> *~\n> *.orig\n> \n> to suppress emacs backup files and patch backup files respectively.\n> Somebody who prefers another editor would have no use for *~.\n> \n> regards, tom lane\n\n\n\n",
"msg_date": "Tue, 3 Dec 2019 14:33:14 -0500",
"msg_from": "David Nedrow <dnedrow@me.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Addition of JetBrains project directory to .gitignore"
},
{
"msg_contents": "On Tue, Dec 03, 2019 at 10:07:08AM -0500, Tom Lane wrote:\n> As a point of reference, I have\n> \n> $ cat ~/.gitexclude \n> *~\n> *.orig\n> \n> to suppress emacs backup files and patch backup files respectively.\n> Somebody who prefers another editor would have no use for *~.\n\nHere are extra entries I use for example:\n# Files created by vim for unsaved changes\n.*.swp\n# Files created by emacs for unsaved changes\n.#*\n# Temporary files created during compilation\n*.o-*\n# Tags generated by etags or ctags\nTAGS\ntags\n# Files created by ./configure\nconftest.c\nconftest.err\nconfdefs.h\n--\nMichael",
"msg_date": "Wed, 4 Dec 2019 10:06:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Addition of JetBrains project directory to .gitignore"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nOne of our customers complains about that some sessions generates \"too \nmuch WAL records\".\nCertainly WAL activity doesn't indicate a problem itself: huge workload \ncause huge WAL activity.\nBut them are trying to understand which clients produces so much \ndatabase changes and complain that there is\nno way to get such information in Postgres. For example in Oracle this \nproblems can be solved in this way:\n\nhttp://www.dba-oracle.com/t_find_session_generating_high_redo.htm\n\nUnfortunately there is actually no simple and accurate way to calculate \namount of WAL produced by the particular session.\nIt is possible to parse WAL (for example using pg_waldump), then using \nXID->pid mapping accumulate size of transactions produced by each backend.\nBut this is very inconvenient and not DBA friendly approach.\n\nI have implemented small patch which collects such statistic.\nI have added walWritten field to PGPROC and increment it in \nCopyXLogRecordToWAL.\nIt is possible to inspect this field using pg_stat_get_wal_activity(pid) \nfunction and also I have added\npg_stat_wal_activity which just adds wal_written to standard \npg_activity view:\n\npostgres=# select pid, backend_type, wal_written from pg_stat_wal_activity ;\n pid | backend_type | wal_written\n------+------------------------------+-------------\n 4405 | autovacuum launcher | 0\n 4407 | logical replication launcher | 0\n 4750 | client backend | 86195\n 4403 | background writer | 204\n 4402 | checkpointer | 328\n 4404 | walwriter | 0\n(6 rows)\n\n\n\nI wonder if such extra statistic about session WAL activity is \nconsidered to be useful?\nThe only problem with this approach from my point of view is adding 8 \nbytes to PGPROC.\nBut there are already so many fields in this structure \n(sizeof(PGPROC)=816), that adding yet another 8 bytes should not be \nnoticeable.\n\nComments are welcome.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe 
Russian Postgres Company",
"msg_date": "Tue, 3 Dec 2019 18:01:28 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Session WAL activity"
},
{
"msg_contents": "Hello.\n\nAt Tue, 3 Dec 2019 18:01:28 +0300, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote in \n> Hi hackers,\n> \n> One of our customers complains about that some sessions generates \"too\n> much WAL records\".\n> Certainly WAL activity doesn't indicate a problem itself: huge\n> workload cause huge WAL activity.\n> But them are trying to understand which clients produces so much\n> database changes and complain that there is\n> no way to get such information in Postgres. For example in Oracle this\n> problems can be solved in this way:\n> \n> http://www.dba-oracle.com/t_find_session_generating_high_redo.htm\n> \n> Unfortunately there is actually no simple and accurate way to\n> calculate amount of WAL produced by the particular session.\n> It is possible to parse WAL (for example using pg_waldump), then using\n> XID->pid mapping accumulate size of transactions produced by each\n> backend.\n> But this is very inconvenient and not DBA friendly approach.\n> \n> I have implemented small patch which collects such statistic.\n> I have added walWritten field to PGPROC and increment it in\n> CopyXLogRecordToWAL.\n> It is possible to inspect this field using\n> pg_stat_get_wal_activity(pid) function and also I have added\n> pg_stat_wal_activity which just adds wal_written to standard\n> pg_activity view:\n> \n> postgres=# select pid, backend_type, wal_written from\n> pg_stat_wal_activity ;\n> pid | backend_type | wal_written\n> ------+------------------------------+-------------\n> 4405 | autovacuum launcher | 0\n> 4407 | logical replication launcher | 0\n> 4750 | client backend | 86195\n> 4403 | background writer | 204\n> 4402 | checkpointer | 328\n> 4404 | walwriter | 0\n> (6 rows)\n> \n> \n> \n> I wonder if such extra statistic about session WAL activity is\n> considered to be useful?\n>\n> The only problem with this approach from my point of view is adding 8\n> bytes to PGPROC.\n> But there are already so many fields in this structure\n> 
(sizeof(PGPROC)=816), that adding yet another 8 bytes should not be\n> noticeable.\n> \n> Comments are welcome.\n\nIt seems to be useful to me. We also might want statistics of other\nsession IOs. In that case the table name would be\n\"pg_stat_session/process_activity\". We are aleady collecting most\nkinds of the IO activity but it loses session information...\n\nBriefly looking the patch, I have some comments on it.\n\nAs mentioned above, if we are intending future exantion of the\nsession-stats table, the name should be changed.\n\nBackend status is more appropriate than PGPROC. See pgstat.c.\n\nSome kind of locking is needed to update the fields on shared segment.\n(LWLocks for PGPROC and PGSTAT_BEGIN/END_WRITE_ACTIVITY for\nPgBackendStatus)\n\nKnitpickings:\n\nThe patch contains a trace of older trial in\npg_stat_get_activity. Proc OID should be >= 8000 in\npatches. src/include/catalog/unused_oids offers some OID for you.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 04 Dec 2019 14:33:07 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Session WAL activity"
},
{
"msg_contents": "\n\nOn 04.12.2019 8:33, Kyotaro Horiguchi wrote:\n> It seems to be useful to me. We also might want statistics of other\n> session IOs. In that case the table name would be\n> \"pg_stat_session/process_activity\". We are aleady collecting most\n> kinds of the IO activity but it loses session information...\n\nWell, actually monitoring disk activity for the particular \nbackend/session can be easily done using some external tools\n(just because now in Postgres session=backend=process). So we can \nmonitor IO of processes, for example using iotop at Unix\nor Performance Monitor at Windows.\n\nCertainly it is more convenient to have such statstic inside Postgres. \nBut I am not sure if it is really needed.\nConcerning WAL activity situation is more obscure: records can be added \nto the WAL by one process, but written by another.\nThis is why it is not possible to use some external tools.\n\n\n>\n> Briefly looking the patch, I have some comments on it.\n>\n> As mentioned above, if we are intending future exantion of the\n> session-stats table, the name should be changed.\n>\n> Backend status is more appropriate than PGPROC. See pgstat.c.\nDo you mean pgstat_fetch_stat_beentry?\nBut why it is better than storing this information directly in PGPROC?\nAs far as this information ha to be updated from XLogInsertRecord and \nit seems to be very performance critical function my intention was to \nminimize\noverhead of maintaining this statistic. It is hard to imagine something \nmore efficient than just MyProc->walWriten += write_len;\n\nAlso pgstat_fetch_stat_beentry is taken backend id, which is not \nreported in pg_stat_activity view and this is why it is more\nconvenient to pass PID to pg_stat_get_wal_activity. Certainly it is \npossible to map PID to backendid, but... 
why actually do we need to\nperform such mapping if simpler solution exists?\n\n> Some kind of locking is needed to update the fields on shared segment.\n> (LWLocks for PGPROC and PGSTAT_BEGIN/END_WRITE_ACTIVITY for\n> PgBackendStatus)\nThis information is updated locally only by backend itself.\nCertainly update of 64 bit field is not atomic at 32-but architectures.\nBut it is just statistic. I do not think that it will be fatal if for a \nmoment\nwe can see some incorrect value of written WAL bytes (and at most \nplatforms this\nupdate will be atomic).\n\nAs I already wrote above, this information in updated in performance \ncritical place and this is why\nI want to avoid any expensive operations (such as locking or atomic \nupdates) as much as possible.\n> Knitpickings:\n>\n> The patch contains a trace of older trial in\n> pg_stat_get_activity. Proc OID should be >= 8000 in\n> patches. src/include/catalog/unused_oids offers some OID for you.\n>\n\nWill fix it.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Wed, 4 Dec 2019 16:40:27 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Session WAL activity"
},
{
"msg_contents": "Hi.\n\nAt Wed, 4 Dec 2019 16:40:27 +0300, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote in \n> \n> \n> On 04.12.2019 8:33, Kyotaro Horiguchi wrote:\n> > It seems to be useful to me. We also might want statistics of other\n> > session IOs. In that case the table name would be\n> > \"pg_stat_session/process_activity\". We are aleady collecting most\n> > kinds of the IO activity but it loses session information...\n> \n> Well, actually monitoring disk activity for the particular\n> backend/session can be easily done using some external tools\n> (just because now in Postgres session=backend=process). So we can\n> monitor IO of processes, for example using iotop at Unix\n> or Performance Monitor at Windows.\n\nOperations that completes on shared buffers cannot be monitored that\nway. This is the same with WAL writing.\n\n> Certainly it is more convenient to have such statstic inside\n> Postgres. But I am not sure if it is really needed.\n> Concerning WAL activity situation is more obscure: records can be\n> added to the WAL by one process, but written by another.\n> This is why it is not possible to use some external tools.\n\nFor clarity, I didn't suggest that this patch should include general\nsession IO statistics. Just the view name looked a bit specific.\n\n> > Briefly looking the patch, I have some comments on it.\n> >\n> > As mentioned above, if we are intending future exantion of the\n> > session-stats table, the name should be changed.\n> >\n> > Backend status is more appropriate than PGPROC. See pgstat.c.\n> Do you mean pgstat_fetch_stat_beentry?\n> But why it is better than storing this information directly in PGPROC?\n\nNo it cannot be used there for performance reasons as you are\nsaying. I'm not sure it's acceptable, but we can directly access\nbackend status the same way if we expose MyBEEntry (and update it\nthrough a macro or a inline function). 
If we don't need per record\nresolution for the value, we can update a process local variable at\nWAL-write time then write it to backend status at commit time or at\nthe same timing as pgstat reporting.\n\nAccording to my faint memory, PGPROC is thought that it must be kept\nas small as possible for the reasons of CPU caches, that is the reason\nfor PgBackendStatus.\n\n> As far as this information ha to be updated from XLogInsertRecord and\n> it seems to be very performance critical function my intention was to\n> minimize\n> overhead of maintaining this statistic. It is hard to imagine\n> something more efficient than just MyProc->walWriten += write_len;\n> \n> Also pgstat_fetch_stat_beentry is taken backend id, which is not\n> reported in pg_stat_activity view and this is why it is more\n> convenient to pass PID to pg_stat_get_wal_activity. Certainly it is\n> possible to map PID to backendid, but... why actually do we need to\n> perform such mapping if simpler solution exists?\n> \n> > Some kind of locking is needed to update the fields on shared segment.\n> > (LWLocks for PGPROC and PGSTAT_BEGIN/END_WRITE_ACTIVITY for\n> > PgBackendStatus)\n> This information is updated locally only by backend itself.\n> Certainly update of 64 bit field is not atomic at 32-but\n> architectures.\n> But it is just statistic. 
I do not think that it will be fatal if for\n> a moment\n> we can see some incorrect value of written WAL bytes (and at most\n> platforms this\n> update will be atomic).\n\nAt least reader needs to take procarray lock to keep PID-WALwrite\nconsistency, in order to prevent reading WALwrite values for a wrong\nprocess.\n\n> As I already wrote above, this information in updated in performance\n> critical place and this is why\n> I want to avoid any expensive operations (such as locking or atomic\n> updates) as much as possible.\n\nI'm afraid that the reason doesn't justify expanding PGPROC..\n\n> > Knitpickings:\n> >\n> > The patch contains a trace of older trial in\n> > pg_stat_get_activity. Proc OID should be >= 8000 in\n> > patches. src/include/catalog/unused_oids offers some OID for you.\n> >\n> \n> Will fix it.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 05 Dec 2019 11:37:23 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Session WAL activity"
},
{
"msg_contents": "\n\nOn 05.12.2019 5:37, Kyotaro Horiguchi wrote:\n> It seems to be useful to me. We also might want statistics of other\n>>> session IOs. In that case the table name would be\n>>> \"pg_stat_session/process_activity\". We are aleady collecting most\n>>> kinds of the IO activity but it loses session information...\n>> Well, actually monitoring disk activity for the particular\n>> backend/session can be easily done using some external tools\n>> (just because now in Postgres session=backend=process). So we can\n>> monitor IO of processes, for example using iotop at Unix\n>> or Performance Monitor at Windows.\n> Operations that completes on shared buffers cannot be monitored that\n> way. This is the same with WAL writing.\n\nThe questions is what we are going to monitor?\nAmount of read/dirtied buffers or amount of disk ops?\n>\n>> Certainly it is more convenient to have such statstic inside\n>> Postgres. But I am not sure if it is really needed.\n>> Concerning WAL activity situation is more obscure: records can be\n>> added to the WAL by one process, but written by another.\n>> This is why it is not possible to use some external tools.\n> For clarity, I didn't suggest that this patch should include general\n> session IO statistics. Just the view name looked a bit specific.\n\nI am not sure if pg_stat_wal_activity view should be added at all.\nWe can just add pg_stat_get_wal_activity function and let user specify \nPID of backend himself (for example by performing join with \npg_stat_activity).\nI proposed name pg_stat_wal_activity just for similarity with \npg_stat_activity but can use any other proposed name.\n\n>\n>>> Briefly looking the patch, I have some comments on it.\n>>>\n>>> As mentioned above, if we are intending future exantion of the\n>>> session-stats table, the name should be changed.\n>>>\n>>> Backend status is more appropriate than PGPROC. 
See pgstat.c.\n>> Do you mean pgstat_fetch_stat_beentry?\n>> But why it is better than storing this information directly in PGPROC?\n> No it cannot be used there for performance reasons as you are\n> saying. I'm not sure it's acceptable, but we can directly access\n> backend status the same way if we expose MyBEEntry (and update it\n> through a macro or a inline function). If we don't need per record\n> resolution for the value, we can update a process local variable at\n> WAL-write time then write it to backend status at commit time or at\n> the same timing as pgstat reporting.\n>\n> According to my faint memory, PGPROC is thought that it must be kept\n> as small as possible for the reasons of CPU caches, that is the reason\n> for PgBackendStatus.\nWhy do you think that adding one addition (without any locks and \nfunction calls) to CopyXLogRecordToWAL is not acceptable.\nIt is just one instruction added to expensive functions. At least I have \nnot noticed any measurable impact on performance.\n\nConcerning keeping PGPROC size as small as possible, I agree that it is \nreasonable argument.\nBut even now it is very large (816 bytes) and adding extra 8 bytes will \nincrease it on less than 1%.\n\n>>\n>> This information is updated locally only by backend itself.\n>> Certainly update of 64 bit field is not atomic at 32-but\n>> architectures.\n>> But it is just statistic. 
I do not think that it will be fatal if for\n>> a moment\n>> we can see some incorrect value of written WAL bytes (and at most\n>> platforms this\n>> update will be atomic).\n> At least reader needs to take procarray lock to keep PID-WALwrite\n> consistency, in order to prevent reading WALwrite values for a wrong\n> process.\n\nSorry, but I still do not understand whats wrong can happen if reader \nwill see WAL activity of wrong process.\nYes, correspondent backend may be already terminated and its PGPROC \nentry can be reused for some other process.\nIn this case we can wrongly attribute WAL traffic generated by \nterminated backend to the new process\nor report zero traffic for old process. But this information is mostly \nneeded for live (active) backends. So I do not think\nthat race conditions here are so critical.\n\nRight now pg_stat_activity also accessing PGPROC to obtain wait event \ninformation and also not taking any locks.\nSo it can wrongly report backend status. But I never heard that somebody \ncomplains about it.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Thu, 5 Dec 2019 12:23:40 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Session WAL activity"
},
{
"msg_contents": "On Thu, Dec 05, 2019 at 12:23:40PM +0300, Konstantin Knizhnik wrote:\n> Concerning keeping PGPROC size as small as possible, I agree that it is\n> reasonable argument.\n> But even now it is very large (816 bytes) and adding extra 8 bytes will\n> increase it on less than 1%.\n\nIt does not mean that we should add all kind of things to PGPROC as\nthat's a structure sensitive enough already. By the way, why do you \nassume that 8-byte reads are always safe and atomic in the patch?\n\n> Right now pg_stat_activity also accessing PGPROC to obtain wait event\n> information and also not taking any locks.\n> So it can wrongly report backend status. But I never heard that somebody\n> complains about it.\n\nPlease see pgstat.h, close to pgstat_report_wait_start().\n--\nMichael",
"msg_date": "Fri, 6 Dec 2019 10:57:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Session WAL activity"
},
{
"msg_contents": "\n\nOn 06.12.2019 4:57, Michael Paquier wrote:\n> On Thu, Dec 05, 2019 at 12:23:40PM +0300, Konstantin Knizhnik wrote:\n>> Concerning keeping PGPROC size as small as possible, I agree that it is\n>> reasonable argument.\n>> But even now it is very large (816 bytes) and adding extra 8 bytes will\n>> increase it on less than 1%.\n> It does not mean that we should add all kind of things to PGPROC as\n> that's a structure sensitive enough already. By the way, why do you\n> assume that 8-byte reads are always safe and atomic in the patch?\nI never assumed it - in the previous mail I wrote:\n\nCertainly update of 64 bit field is not atomic at 32-but architectures.\n\n>> Right now pg_stat_activity also accessing PGPROC to obtain wait event\n>> information and also not taking any locks.\n>> So it can wrongly report backend status. But I never heard that somebody\n>> complains about it.\n> Please see pgstat.h, close to pgstat_report_wait_start().\n\nSorry, I do not understand what should I look for?\nDo you mean this comment:\n ��� /*\n ��� �* Since this is a four-byte field which is always read and written as\n ��� �* four-bytes, updates are atomic.\n ���� */\n\nYes, I already� have noticed that as far as walWritten is 64-bit, its \nupdate is not atomic at 32-bit platforms and so it is possible to see \nsometimes incorrect values.\nSo monotone observe of walWritten can be violated. From my point of view \nit is not so critical to enforce update of this fields under lock or \naccumulating result in local variable with later write it to backend \nstatus at commit time as Kyotaro proposed. 
Monitoring of WAL activity is \nespecially interested for long living transactions and from my point of \nview it is much more\nimportant to be able to see up-to-date but may be not always correct \ninformation then do not see any information at all before commit.\nPlease also take in account the percent of 32-bit Postgres installations \nand probability of observing non-atomic update of 64-bit walWritten \nfield (I think that you will have no chances to see it even if you will \nrun Postgres for a years).\n\nBut what I mean by \"wrongly report backend wait event status\" is that� \npg_stat_activity may report wait event status for wrong backend.\nI.e. if backend is already terminated and its PGPROC entry is reused by \nsome other backend, than you can see incorrect wait event information:\nbackend with such PID actually never sleep on this event.\n\n\nIn my reply\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Fri, 6 Dec 2019 11:22:14 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Session WAL activity"
},
{
"msg_contents": "At Fri, 6 Dec 2019 11:22:14 +0300, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote in \n> \n> \n> On 06.12.2019 4:57, Michael Paquier wrote:\n> > On Thu, Dec 05, 2019 at 12:23:40PM +0300, Konstantin Knizhnik wrote:\n> > Please see pgstat.h, close to pgstat_report_wait_start().\n> \n> Sorry, I do not understand what should I look for?\n> Do you mean this comment:\n> /*\n> * Since this is a four-byte field which is always read and\n> written as\n> * four-bytes, updates are atomic.\n> */\n> \n> Yes, I already have noticed that as far as walWritten is 64-bit, its\n> update is not atomic at 32-bit platforms and so it is possible to see\n> sometimes incorrect values.\n> So monotone observe of walWritten can be violated. From my point of\n> view it is not so critical to enforce update of this fields under lock\n> or accumulating result in local variable with later write it to\n> backend status at commit time as Kyotaro proposed. Monitoring of WAL\n> activity is especially interested for long living transactions and\n> from my point of view it is much more\n> important to be able to see up-to-date but may be not always correct\n> information then do not see any information at all before commit.\n> Please also take in account the percent of 32-bit Postgres\n> installations and probability of observing non-atomic update of 64-bit\n> walWritten field (I think that you will have no chances to see it even\n> if you will run Postgres for a years).\n\nStill I'm not sure non-atomic write is acceptable, but I agree on the\nnecessity of updating it during a transaction. Couldn't we update\nshared stats every n bytes (XLOG_BLCKSZ or such) or every command end?\n\nI think we should refrain from inserting an instruction within the\nWALInsertLock section, but I'm not sure which is better between \"var\n+= var\" within the section and \"if (inserted) var += var;\" outside. 
If\nwe can ignore the possitbility of the case where xlogswitch is\nomitted, the \"if (inserted)\" is not needed.\n\n> But what I mean by \"wrongly report backend wait event status\" is that \n> pg_stat_activity may report wait event status for wrong backend.\n> I.e. if backend is already terminated and its PGPROC entry is reused\n> by some other backend, than you can see incorrect wait event\n> information:\n> backend with such PID actually never sleep on this event.\n\nI saw a case where an entry with very old xact_start_timestamp\nsuddenly popped up in pg_stat_activity but I haven't found the cause..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 11 Dec 2019 13:26:22 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Session WAL activity"
},
{
"msg_contents": "On 11.12.2019 7:26, Kyotaro Horiguchi wrote:\n>\n> Still I'm not sure non-atomic write is acceptable, but I agree on the\n> necessity of updating it during a transaction. Couldn't we update\n> shared stats every n bytes (XLOG_BLCKSZ or such) or every command end?\n>\n> I think we should refrain from inserting an instruction within the\n> WALInsertLock section, but I'm not sure which is better between \"var\n> += var\" within the section and \"if (inserted) var += var;\" outside. If\n> we can ignore the possitbility of the case where xlogswitch is\n> omitted, the \"if (inserted)\" is not needed.\n\nI think that 32-bit Postgres installations are really exotic, but I \nagree that showing incorrect result (even with very small probability)\nis not acceptable behavior in this case. I attached new versoin of the \npatch with use pg_atomic_write_u64 for updating walWritten field.\nAs far as at 64-bit systems, pg_atomic_write_u64and pg_atomic_read_u64 \nare translated to ordinary memory access, them should not have some \nnegative\nimpact on performance.\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 11 Dec 2019 12:37:57 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Session WAL activity"
},
{
"msg_contents": "On Fri, 6 Dec 2019 at 09:57, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Dec 05, 2019 at 12:23:40PM +0300, Konstantin Knizhnik wrote:\n> > Concerning keeping PGPROC size as small as possible, I agree that it is\n> > reasonable argument.\n> > But even now it is very large (816 bytes) and adding extra 8 bytes will\n> > increase it on less than 1%.\n>\n> It does not mean that we should add all kind of things to PGPROC as\n> that's a structure sensitive enough already.\n\n\nRight. It's not as critical as PGXACT, but PGPROC is still significant for\nscalability and connection count limits.\n\nIt's a shame we can't really keep most of it in backend-private memory and\ncopy it to requestors when they want it, say into a temporary DSA or over a\nshm_mq. But our single threaded model means we just cannot rely on backends\nbeing responsive in a timely enough manner to supply data on-demand. That\ndoesn't mean we have to push it to PGPROC though: we could be sending the\nparts that don't have to be super-fresh to the stats collector or a new\nrelated process for active session stats and letting it aggregate them.\n\nThat's way beyond the scope of this patch though. So personally I'm OK with\nthe new PGPROC field. Visibility into Pg's activity is woefully limited and\nsomething we need to prioritize more.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\nOn Fri, 6 Dec 2019 at 09:57, Michael Paquier <michael@paquier.xyz> wrote:On Thu, Dec 05, 2019 at 12:23:40PM +0300, Konstantin Knizhnik wrote:\n> Concerning keeping PGPROC size as small as possible, I agree that it is\n> reasonable argument.\n> But even now it is very large (816 bytes) and adding extra 8 bytes will\n> increase it on less than 1%.\n\nIt does not mean that we should add all kind of things to PGPROC as\nthat's a structure sensitive enough already.Right. 
It's not as critical as PGXACT, but PGPROC is still significant for scalability and connection count limits.It's a shame we can't really keep most of it in backend-private memory and copy it to requestors when they want it, say into a temporary DSA or over a shm_mq. But our single threaded model means we just cannot rely on backends being responsive in a timely enough manner to supply data on-demand. That doesn't mean we have to push it to PGPROC though: we could be sending the parts that don't have to be super-fresh to the stats collector or a new related process for active session stats and letting it aggregate them.That's way beyond the scope of this patch though. So personally I'm OK with the new PGPROC field. Visibility into Pg's activity is woefully limited and something we need to prioritize more.-- Craig Ringer http://www.2ndQuadrant.com/ 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Thu, 12 Dec 2019 09:31:22 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Session WAL activity"
},
{
"msg_contents": "On Thu, Dec 12, 2019 at 09:31:22AM +0800, Craig Ringer wrote:\n> On Fri, 6 Dec 2019 at 09:57, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Thu, Dec 05, 2019 at 12:23:40PM +0300, Konstantin Knizhnik wrote:\n> > Concerning keeping PGPROC size as small as possible, I agree that it is\n> > reasonable argument.\n> > But even now it is very large (816 bytes) and adding extra 8 bytes will\n> > increase it on less than 1%.\n> \n> It does not mean that we should add all kind of things to PGPROC as\n> that's a structure sensitive enough already.\n> \n> \n> Right. It's not as critical as PGXACT, but PGPROC is still significant for\n> scalability and connection count limits.\n> \n> It's a shame we can't really keep most of it in backend-private memory and copy\n> it to requestors when they want it, say into a temporary DSA or over a shm_mq.\n> But our single threaded model means we just cannot rely on backends being\n> responsive in a timely enough manner to supply data on-demand. That doesn't\n> mean we have to push it to PGPROC though: we could be sending the parts that\n> don't have to be super-fresh to the stats collector or a new related process\n> for active session stats and letting it aggregate them.\n> \n> That's way beyond the scope of this patch though. So personally I'm OK with the\n> new PGPROC field. Visibility into Pg's activity is woefully limited and\n> something we need to prioritize more.\n\nUh, how much does the new field get us near the CPU cache line max size\nfor a single PGPROC entry?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 20 Dec 2019 16:38:32 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Session WAL activity"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-20 16:38:32 -0500, Bruce Momjian wrote:\n> On Thu, Dec 12, 2019 at 09:31:22AM +0800, Craig Ringer wrote:\n> > On Fri, 6 Dec 2019 at 09:57, Michael Paquier <michael@paquier.xyz> wrote:\n> > \n> > On Thu, Dec 05, 2019 at 12:23:40PM +0300, Konstantin Knizhnik wrote:\n> > > Concerning keeping PGPROC size as small as possible, I agree that it is\n> > > reasonable argument.\n> > > But even now it is very large (816 bytes) and adding extra 8 bytes will\n> > > increase it on less than 1%.\n> > \n> > It does not mean that we should add all kind of things to PGPROC as\n> > that's a structure sensitive enough already.\n\nWell, but we don't keep other stats in PGPROC, even when we have them in\nshared memory? It seems like PgBackendStatus or such might be a better\nplace?\n\n\n> > Right. It's not as critical as PGXACT, but PGPROC is still significant for\n> > scalability and connection count limits.\n> > \n> > It's a shame we can't really keep most of it in backend-private memory and copy\n> > it to requestors when they want it, say into a temporary DSA or over\n> > a shm_mq.\n\nI don't understand what that would buy? Commonly accessed field are just\ngoing to be in L2 or such, with the cacheline being in\nmodified/exclusive state. The problem isn't that fields / cachelines\n*can* be accessed by other backends, it's only a problem *if* they're\nfrequently accessed. And even if accessed by multiple backends, it's\nonly really a problem if there are multiple fields *and* they're also\nmodified (otherwise they can just stay in shared stated across\ncpus/sockets).\n\nThere *is* an argument for grouping fields in PGPROC by their access\npatterns. E.g. something like ->procArrayGroup* is a lot more commonly\naccessed by different backends than e.g. this proposed field.\n\n\n> > But our single threaded model means we just cannot rely on backends being\n> > responsive in a timely enough manner to supply data on-demand. 
That doesn't\n> > mean we have to push it to PGPROC though: we could be sending the parts that\n> > don't have to be super-fresh to the stats collector or a new related process\n> > for active session stats and letting it aggregate them.\n\nWe should definitely *NOT* do that. Ferrying things through the stats\ncollector is really expensive, and everyone pays the price for an\nincrease in size, not just code accessing the field. In fact, no\nreasonable quantity that's known at server start should ever go through\na mechanism as expensive as pgstat - the only reason it exists is that\nthe number of tables obviously can grow over time.\n\nThere's a thread somewhere about a patchset to move all of pgstat into\ndynamic shared memory, actually. Because the writes / reads needed by\npgstat are really bad on some systems.\n\n\n> > That's way beyond the scope of this patch though. So personally I'm OK with the\n> > new PGPROC field. Visibility into Pg's activity is woefully limited and\n> > something we need to prioritize more.\n> \n> Uh, how much does the new field get us near the CPU cache line max size\n> for a single PGPROC entry?\n\nIt's like ~13x the common size of a cache line (64bytes).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 20 Dec 2019 16:48:43 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Session WAL activity"
}
]
[
{
"msg_contents": "Responding to the recent thread on bumping the default TLS version, I realized\nthat we don't have a way to set the minimum/maximum TLS protocol version in\nclientside libpq. Setting the maximum protocol version obviously not terribly\nimportant (possibly with the exception of misbehaving middle-boxes and\ntesting), but the minimum version can be quite useful to avoid misbehaving\nand/or misconfigured servers etc.\n\nThe attached patch implements two new connection string variables for minimum\nand maximum TLS protocol version, mimicking how it's done in the backend. This\ndoes duplicate a bit of code from be-secure-openssl.c to cope with older\nversions of OpenSSL, but it seemed a too trivial duplication to create\ncommon/openssl.c (but others might disagree).\n\nThis can today be achieved by editing the local openssl configuration, but\nhaving an override in libpq to tighten down the connection parameters make it\nfar easier for the user/application IMO.\n\ncheers ./daniel",
"msg_date": "Tue, 3 Dec 2019 18:37:48 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Setting min/max TLS protocol in clientside libpq"
},
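The two proposed parameters are used like any other libpq connection-string option. A hypothetical invocation (the parameter names follow the patch under discussion and could still change before commit):

```
# Require at least TLSv1.2, and cap at TLSv1.2 e.g. for compatibility testing.
# Names are the ones proposed in this thread, not a committed interface.
host=db.example.com sslmode=require sslminprotocolversion=TLSv1.2 sslmaxprotocolversion=TLSv1.2
```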
{
"msg_contents": "Hello,\n\nOn 2019/12/04 2:37, Daniel Gustafsson wrote:\n> The attached patch implements two new connection string variables for minimum\n> and maximum TLS protocol version, mimicking how it's done in the backend. This\n> does duplicate a bit of code from be-secure-openssl.c to cope with older\n> versions of OpenSSL, but it seemed a too trivial duplication to create\n> common/openssl.c (but others might disagree).\n\nI've looked at the patch and I have a couple comments.\n\n> +\t\tif (ssl_max_ver < ssl_min_ver)\n> +\t\t{\n> +\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> +\t\t\t\t\t\t\t libpq_gettext(\"invalid maximum SSL version specified, must be higher than minimum SSL version: %s\\n\"),\n> +\t\t\t\t\t\t\t conn->sslmaxprotocolversion);\n> +\t\t\treturn -1;\n> +\t\t}\n> +\n> +\t\tif (ssl_max_ver == -1)\n> +\t\t{\n> +\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> +\t\t\t\t\t\t\t libpq_gettext(\"invalid maximum SSL version specified: %s\\n\"),\n> +\t\t\t\t\t\t\t conn->sslmaxprotocolversion);\n> +\t\t\treturn -1;\n> +\t\t}\n\nI think we should raise the error \"invalid maximum SSL version \nspecified\" earlier. If ssl_protocol_version_to_openssl() returns -1 and \nssl_min_ver is valid we never reach the condition \"ssl_max_ver == -1\". \nAlso it might confuse users to get the error \"invalid maximum SSL \nversion specified, must be higher than minimum SSL version\" instead of \nformer one.\n\nSecondly I think the error \"invalid maximum SSL version specified\" \nitself might confuse users, in the case if the input is good but a build \ndoesn't support desired version. So I think it is better to do two \nchecks here: check for a correct input and check if a build supports it. \nIn the second case we may raise \"SSL version %s not supported by this \nbuild\". It is actually like backend does: guc.c checks for correct input \nusing ssl_protocol_versions_info and ssl_protocol_version_to_openssl() \nchecks if a build supports the version.\n\n-- \nArthur\n\n\n",
"msg_date": "Thu, 19 Dec 2019 11:11:31 +0900",
"msg_from": "Arthur Zakirov <zaartur@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: tested, passed\n\nHello\r\n\r\nI have applied the patch and did some basic testing with various combination of min and max TLS protocol versions. Overall the functionality works and the chosen TLS protocol aligns with the min/max TLS protocol settings on the PG server side.\r\n\r\nI agree with Arthur that it makes sense to check the validity of \"conn->sslmaxprotocolversion\" first before checking if it is larger than \"conn->sslminprotocolversion\"\r\n\r\nA small suggestion here. I see that PG server defaults TLS min version to be TLSv1.2 and max version to none. So by default the server accepts TLSv1.2 and above. I think on the client side, it also makes sense to have the same defaults as the server. In the patch, if the client does not supply \"sslminprotocolversion\", it will run to \"else\" statement and sets TLS min version to \"INT_MIN\", which is a huge negative number and of course openssl won't set it. I think this else statement can be enhanced a little to set \"sslminprotocolversion\" to TLSv1.2 by default to match the server and provide some warning message that TLS minimum has defaulted to TLSv1.2.\r\n\r\nCary\r\nHighGo Software Canada",
"msg_date": "Thu, 02 Jan 2020 21:46:44 +0000",
"msg_from": "cary huang <hcary328@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "On Thu, Jan 02, 2020 at 09:46:44PM +0000, cary huang wrote:\n> I agree with Arthur that it makes sense to check the validity of\n> \"conn->sslmaxprotocolversion\" first before checking if it is larger\n> than \"conn->sslminprotocolversion\"\n\nHere I don't agree. Why not just let OpenSSL handle things with\nSSL_CTX_set_min_proto_version? We don't bother about that in the\nbackend code for that reason on top of keeping the code more simple\nwith less error handling. And things are cleaner when it comes to\nthis libpq patch by giving up with the INT_MIN hack.\n\n> A small suggestion here. I see that PG server defaults TLS min\n> version to be TLSv1.2 and max version to none. So by default the\n> server accepts TLSv1.2 and above. I think on the client side, it\n> also makes sense to have the same defaults as the server.\n\nYeah, that makes sense. Even more now that I have just removed\nsupport for OpenSSL 0.9.8 and 1.0.0 ;)\n\nThere could be an argument to lower down the default if we count for\nbackends built with OpenSSL versions older than libpq, but I am not\nready to buy that there would be many of those.\n\n> In the patch, if the client does not supply \"sslminprotocolversion\",\n> it will run to \"else\" statement and sets TLS min version to \"INT_MIN\",\n> which is a huge negative number and of course openssl won't set\n> it. I think this else statement can be enhanced a little to set\n> \"sslminprotocolversion\" to TLSv1.2 by default to match the server\n> and provide some warning message that TLS minimum has defaulted to\n> TLSv1.2. \n\nIn this patch fe-secure-openssl.c has just done a copy-paste of\nSSL_CTX_set_min_proto_version and SSL_CTX_set_max_proto_version\npresent in be-secure-openssl.c. That's not good. Could you refactor\nthat please as a separate file? For example openssl-protocol.c in\nsrc/common/? src/common/ stuff is built with -fPIC since 7143b3e so\nthere is no need to include directly the source files in the\nMakefile. A shame you cannot do that for\nssl_protocol_version_to_openssl(), so for that a note would be welcome\non top of the former backend routine and the one you are adding.\n\nThe patch has conflicts with libpq-int.h as far as I can see. That\nshould be easy enough to solve.\n\nThe patch should have tests in src/test/ssl/, like for invalid values,\nincorrect combinations leading to failures, etc.\n--\nMichael",
"msg_date": "Mon, 6 Jan 2020 15:01:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "On Mon, Jan 6, 2020 at 7:02 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Jan 02, 2020 at 09:46:44PM +0000, cary huang wrote:\n> > I agree with Arthur that it makes sense to check the validity of\n> > \"conn->sslmaxprotocolversion\" first before checking if it is larger\n> > than \"conn->sslminprotocolversion\"\n>\n> Here I don't agree. Why not just let OpenSSL handle things with\n> SSL_CTX_set_min_proto_version? We don't bother about that in the\n> backend code for that reason on top of keeping the code more simple\n> with less error handling. And things are cleaner when it comes to\n> this libpq patch by giving up with the INT_MIN hack.\n>\n> > A small suggestion here. I see that PG server defaults TLS min\n> > version to be TLSv1.2 and max version to none. So by default the\n> > server accepts TLSv1.2 and above. I think on the client side, it\n> > also makes sense to have the same defaults as the server.\n>\n> Yeah, that makes sense. Even more now that I have just removed\n> support for OpenSSL 0.9.8 and 1.0.0 ;)\n>\n> There could be an argument to lower down the default if we count for\n> backends built with OpenSSL versions older than libpq, but I am not\n> ready to buy that there would be many of those.\n\nNot having thought about it in much detail, but it's a fairly common\nscenario to have a much newer version of libpq (and the platform it's\nbuilt on) than the server. E.g. a v12 libpq against a v9.6 postgres\nserver is very common. For example, debian based systems will\nauto-upgrade your libpq, but not your server (for obvious reasons).\nAnd it's also quite common to upgrade platforms for the application\nmuch more frequently than the database server platform.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 6 Jan 2020 14:59:14 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> Not having thought about it in much detail, but it's a fairly common\n> scenario to have a much newer version of libpq (and the platform it's\n> built on) than the server. E.g. a v12 libpq against a v9.6 postgres\n> server is very common. For example, debian based systems will\n> auto-upgrade your libpq, but not your server (for obvious reasons).\n> And it's also quite common to upgrade platforms for the application\n> much more frequently than the database server platform.\n\nYeah, there's a reason why we expect pg_dump and psql to function with\nancient server versions. We shouldn't break this scenario with\ncareless rejiggering of libpq's connection defaults.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 Jan 2020 09:37:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "On Mon, Jan 06, 2020 at 09:37:54AM -0500, Tom Lane wrote:\n> Magnus Hagander <magnus@hagander.net> writes:\n>> Not having thought about it in much detail, but it's a fairly common\n>> scenario to have a much newer version of libpq (and the platform it's\n>> built on) than the server. E.g. a v12 libpq against a v9.6 postgres\n>> server is very common. For example, debian based systems will\n>> auto-upgrade your libpq, but not your server (for obvious reasons).\n>> And it's also quite common to upgrade platforms for the application\n>> much more frequently than the database server platform.\n> \n> Yeah, there's a reason why we expect pg_dump and psql to function with\n> ancient server versions. We shouldn't break this scenario with\n> careless rejiggering of libpq's connection defaults.\n\nGood points. Let's not do that then.\n--\nMichael",
"msg_date": "Tue, 7 Jan 2020 09:51:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "Thanks for review everyone! A v2 of the patch which I believe addresses all\nconcerns raised is attached.\n\n> On 6 Jan 2020, at 07:01, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Thu, Jan 02, 2020 at 09:46:44PM +0000, cary huang wrote:\n>> I agree with Arthur that it makes sense to check the validity of\n>> \"conn->sslmaxprotocolversion\" first before checking if it is larger\n>> than \"conn->sslminprotocolversion\"\n> \n> Here I don't agree. Why not just let OpenSSL handle things with\n> SSL_CTX_set_min_proto_version? We don't bother about that in the\n> backend code for that reason on top of keeping the code more simple\n> with less error handling. And things are cleaner when it comes to\n> this libpq patch by giving up with the INT_MIN hack.\n\nI looked into this and it turns out that OpenSSL does nothing to prevent the\ncaller from setting a nonsensical protocol range like min=tlsv1.3,max=tlsv1.1.\nThus, it's quite easy to screw up the backend server config and get it to start\nproperly, but with quite unrelated error messages as a result on connection.\n\nSince I think this needs to be dealt with for both backend and frontend (if\nthis is accepted), I removed it from this patch to return to it in a separate\nthread.\n\n>> In the patch, if the client does not supply \"sslminprotocolversion\",\n>> it will run to \"else\" statement and sets TLS min version to \"INT_MIN\",\n>> which is a huge negative number and of course openssl won't set\n>> it. I think this else statement can be enhanced a little to set\n>> \"sslminprotocolversion\" to TLSv1.2 by default to match the server\n>> and provide some warning message that TLS minimum has defaulted to\n>> TLSv1.2. \n> \n> In this patch fe-secure-openssl.c has just done a copy-paste of\n> SSL_CTX_set_min_proto_version and SSL_CTX_set_max_proto_version\n> present in be-secure-openssl.c. That's not good. Could you refactor\n> that please as a separate file?\n\nDone. 
I opted for a more generic header to make usage of the code easier, not\nsure if thats ok.\n\nOne thing I noticed when looking at it is that we now have sha2_openssl.c and\nopenssl_protocol.c in src/common. For easier visual grouping of OpenSSL\nfunctionality, it makes sense to me to rename sha2_openssl.c to openssl_sha2.c,\nbut that might just be pointless churn.\n\n> The patch should have tests in src/test/ssl/, like for invalid values,\n> incorrect combinations leading to failures, etc.\n\nAlso done.\n\ncheers ./daniel",
"msg_date": "Fri, 10 Jan 2020 00:01:36 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 12:01:36AM +0100, Daniel Gustafsson wrote:\n> I looked into this and it turns out that OpenSSL does nothing to prevent the\n> caller from setting a nonsensical protocol range like min=tlsv1.3,max=tlsv1.1.\n> Thus, it's quite easy to screw up the backend server config and get it to start\n> properly, but with quite unrelated error messages as a result on connection.\n\nFWIW, here is the error produced, and that's confusing:\n$ psql -d \"host=localhost sslmode=require\"\npsql: error: could not connect to server: SSL error: tlsv1 alert\ninternal error\n\n> Since I think this needs to be dealt with for both backend and frontend (if\n> this is accepted), I removed it from this patch to return to it in a separate\n> thread.\n\nHEAD and back branches only care about the backend, so I think that we\nshould address this part first as your patch would I guess reuse the\ninterface we finish by using for the backend. Looking at OpenSSL, I\nagree that there is no internal logic to perform sanity checks on the\nmin/max bounds. Still I can see that OpenSSL 1.1.0 has added some\n\"get\" routines for SSL_CTX_set_min/max_proto_version:\nhttps://www.openssl.org/docs/man1.1.0/man3/SSL_CTX_set_min_proto_version.html\n\nHmmmmeuh. It would be perfect to rely only on OpenSSL for that part\nto bring some sanity, and compare the results fetched from the SSL\ncontext so as we don't have to worry about special cases in with the\nGUC reload if the parameter is not set, or the parameter value is not\nsupported. Now, OpenSSL <= 1.0.2 cannot do that, and you can get the\nvalues set only after doing the set, so adding the compatibility\nargument it is much more tempting to use our\nssl_protocol_version_to_openssl() wrapper and complain iff:\n- both the min and max are supported values.\n- min/max are incompatible.\nAnd the check needs to be done before attempting to set the min/max\nprotos so as you don't finish with an incorrect intermediate state.\nDaniel, are you planning to start a new thread?\n\n> One thing I noticed when looking at it is that we now have sha2_openssl.c and\n> openssl_protocol.c in src/common. For easier visual grouping of OpenSSL\n> functionality, it makes sense to me to rename sha2_openssl.c to openssl_sha2.c,\n> but that might just be pointless churn.\n\nDatabases like consistency, and so do I, so no issues from me to do a\nrename of the sha2.c file. That makes sense with the addition of the\nnew file.\n--\nMichael",
"msg_date": "Sat, 11 Jan 2020 11:49:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
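The compatibility wrapper discussed above exists because OpenSSL <= 1.0.2 lacks SSL_CTX_set_min_proto_version(); the usual emulation is to disable every protocol below the requested minimum via SSL_CTX_set_options(). A stand-alone sketch of that mapping, using stand-in flag values rather than the real SSL_OP_NO_* constants:

```c
#include <stdint.h>

/* Stand-in flags mirroring OpenSSL's SSL_OP_NO_* options (values are
 * hypothetical, chosen only so the flags can be OR'ed together). */
#define OP_NO_TLSv1    0x1
#define OP_NO_TLSv1_1  0x2
#define OP_NO_TLSv1_2  0x4

enum tls_ver { TLS1 = 0, TLS1_1, TLS1_2, TLS1_3 };

/* Emulate SSL_CTX_set_min_proto_version() for OpenSSL <= 1.0.2 by
 * computing the set of "no" options that disables every protocol
 * strictly older than the requested minimum. */
static uint32_t
min_version_options(enum tls_ver min)
{
    uint32_t opts = 0;

    if (min > TLS1)
        opts |= OP_NO_TLSv1;
    if (min > TLS1_1)
        opts |= OP_NO_TLSv1_1;
    if (min > TLS1_2)
        opts |= OP_NO_TLSv1_2;
    return opts;
}
```

Note the downside Michael mentions: with this scheme there is nothing to "get" back from the context, so the min/max sanity check has to happen before the options are applied.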
{
"msg_contents": "> On 11 Jan 2020, at 03:49, Michael Paquier <michael@paquier.xyz> wrote:\n\n> Hmmmmeuh. It would be perfect to rely only on OpenSSL for that part\n> to bring some sanity, and compare the results fetched from the SSL\n> context so as we don't have to worry about special cases in with the\n> GUC reload if the parameter is not set, or the parameter value is not\n> supported.\n\nI'm not convinced about this, but since there is a thread opened for discussing\nthe range check let's take it over there.\n\n> Daniel, are you planning to start a new thread?\n\nI was going to, but you beat me to it =)\n\n>> One thing I noticed when looking at it is that we now have sha2_openssl.c and\n>> openssl_protocol.c in src/common. For easier visual grouping of OpenSSL\n>> functionality, it makes sense to me to rename sha2_openssl.c to openssl_sha2.c,\n>> but that might just be pointless churn.\n> \n> Databases like consistency, and so do I, so no issues from me to do a\n> rename of the sha2.c file. That makes sense with the addition of the\n> new file.\n\nDone in the attached v3.\n\ncheers ./daniel",
"msg_date": "Tue, 14 Jan 2020 15:34:15 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 11 Jan 2020, at 03:49, Michael Paquier <michael@paquier.xyz> wrote:\n>>> One thing I noticed when looking at it is that we now have sha2_openssl.c and\n>>> openssl_protocol.c in src/common. For easier visual grouping of OpenSSL\n>>> functionality, it makes sense to me to rename sha2_openssl.c to openssl_sha2.c,\n>>> but that might just be pointless churn.\n\n>> Databases like consistency, and so do I, so no issues from me to do a\n>> rename of the sha2.c file. That makes sense with the addition of the\n>> new file.\n\n> Done in the attached v3.\n\nI'm kind of down on renaming files unless there is a *really* strong\nreason for it. It makes back-patching more difficult and it makes\nit much harder to follow the git history. And, seeing that there is\nalso a src/common/sha2.c, it seems to me that renaming sha2_openssl.c\nwill just break consistency in a different way.\n\nMaybe the problem is you've got the new file's name backwards.\nMaybe it should be protocol_openssl.c.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jan 2020 09:49:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "> On 14 Jan 2020, at 15:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>>>> On 11 Jan 2020, at 03:49, Michael Paquier <michael@paquier.xyz> wrote:\n>>>> One thing I noticed when looking at it is that we now have sha2_openssl.c and\n>>>> openssl_protocol.c in src/common. For easier visual grouping of OpenSSL\n>>>> functionality, it makes sense to me to rename sha2_openssl.c to openssl_sha2.c,\n>>>> but that might just be pointless churn.\n> \n>>> Databases like consistency, and so do I, so no issues from me to do a\n>>> rename of the sha2.c file. That makes sense with the addition of the\n>>> new file.\n> \n>> Done in the attached v3.\n> \n> I'm kind of down on renaming files unless there is a *really* strong\n> reason for it. It makes back-patching more difficult and it makes\n> it much harder to follow the git history. And, seeing that there is\n> also a src/common/sha2.c, it seems to me that renaming sha2_openssl.c\n> will just break consistency in a different way.\n> \n> Maybe the problem is you've got the new file's name backwards.\n> Maybe it should be protocol_openssl.c.\n\nThats a very good argument, I’ll send a v4 with protocol_openssl.c when back at the computer.\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 14 Jan 2020 16:15:14 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "> On 14 Jan 2020, at 16:15, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 14 Jan 2020, at 15:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> \n>> Daniel Gustafsson <daniel@yesql.se> writes:\n>>>>> On 11 Jan 2020, at 03:49, Michael Paquier <michael@paquier.xyz> wrote:\n>>>>> One thing I noticed when looking at it is that we now have sha2_openssl.c and\n>>>>> openssl_protocol.c in src/common. For easier visual grouping of OpenSSL\n>>>>> functionality, it makes sense to me to rename sha2_openssl.c to openssl_sha2.c,\n>>>>> but that might just be pointless churn.\n>> \n>>>> Databases like consistency, and so do I, so no issues from me to do a\n>>>> rename of the sha2.c file. That makes sense with the addition of the\n>>>> new file.\n>> \n>>> Done in the attached v3.\n>> \n>> I'm kind of down on renaming files unless there is a *really* strong\n>> reason for it. It makes back-patching more difficult and it makes\n>> it much harder to follow the git history. And, seeing that there is\n>> also a src/common/sha2.c, it seems to me that renaming sha2_openssl.c\n>> will just break consistency in a different way.\n>> \n>> Maybe the problem is you've got the new file's name backwards.\n>> Maybe it should be protocol_openssl.c.\n> \n> Thats a very good argument, I’ll send a v4 with protocol_openssl.c when back at the computer.\n\nFiles renamed to match existing naming convention, the rest of the patch left\nunchanged.\n\ncheers ./daniel",
"msg_date": "Tue, 14 Jan 2020 23:01:00 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "On Tue, Jan 14, 2020 at 11:01:00PM +0100, Daniel Gustafsson wrote:\n> Files renamed to match existing naming convention, the rest of the patch left\n> unchanged.\n\n+ if ((pg_strcasecmp(\"tlsv1\", protocol) == 0) || pg_strcasecmp(\"tlsv1.0\", protocol) == 0)\n+ return TLS1_VERSION;\n\"TLSv1.0\" is not a supported grammar in the backend. So I would just\ndrop it in libpq. It is also not documented.\n\n+ * Portions Copyright (c) 2018-2020, PostgreSQL Global Development Group\n+ *\n+ * IDENTIFICATION\n+ * src/common/protocol_openssl.c\nIt is a nobrainer to just use those lines for copyright notices\ninstead:\n * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\n * Portions Copyright (c) 1994, Regents of the University of California\n\n+ <varlistentry id=\"libpq-connect-sslminprotocolversion\"\nxreflabel=\"sslminprotocolversion\">\n+ <term><literal>sslminprotocolversion</literal></term>\nGot to wonder if we had better not use underscores for those new\nparameter names as they are much longer than their cousins.\nUnderscores would make things more inconsistent.\n\n+ if (ssl_min_ver == -1)\n+ {\n+ printfPQExpBuffer(&conn->errorMessage,\n+ libpq_gettext(\"invalid minimum protocol version specified: %s\\n\"),\n+ conn->sslminprotocolversion);\n+ return -1;\n+ }\n[...]\n+ if (ssl_max_ver == -1)\n+ {\n+ printfPQExpBuffer(&conn->errorMessage,\n+ libpq_gettext(\"invalid or unsupported maximum protocol version specified: %s\\n\"),\n+ conn->sslmaxprotocolversion);\n+ return -1;\n+ }\nError messages for the min/max are inconsistent. I would just use\n\"unsupported\", because...\n\nFollowing with your complain on the other thread about the GUC\nhandling for the min/max protocol parameter. Shouldn't you validate\nthe supported values in connectOptions2() like what's done for the\nother parameters? This way, you can make the difference between an\ninvalid value and an unsupported value with two different error\nstrings. 
By validating the values at an earlier stage, you save a\ncouple of cycles for the application.\n\n+ <literal>TLSv1.3</literal>. The supported protocols depend on the\n+ version of <productname>OpenSSL</productname> used, older versions\n+ doesn't support the modern protocol versions.\nIncorrect grammar => s/doesn't/don't/.\n\nIt would be good to mention that the default is no value, meaning that\nthe minimum and/or the maximum are not enforced in the SSL context.\n\n+ if (conn->sslminprotocolversion)\n+ {\n[...]\n+ if (conn->sslmaxprotocolversion)\n+ {\nYou are missing two checks for empty strings here (aka strlen == 0).\nThese should be treated the same as no value to enforce the protocol\nto. (Let's not add an alias for \"any\".)\n\n+ * Convert TLS protocol versionstring to OpenSSL values\nSpace needed here => \"version string\".\n\nA nit, perhaps unnecessary, but I would use \"TLSv1.1\", etc. in the\nvalues harcoded for libpq. That's easier to grep after, and\nconsistent with the existing conventions even if you apply a\ncase-insensitive comparison.\n--\nMichael",
"msg_date": "Wed, 15 Jan 2020 14:58:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
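The review's remark that pg_strcasecmp() is enough for the range check works because the supported version names happen to sort lexicographically in protocol order: "TLSv1" < "TLSv1.1" < "TLSv1.2" < "TLSv1.3". A minimal sketch, substituting the libc strcasecmp() for pg_strcasecmp():

```c
#include <strings.h>   /* strcasecmp(), the libc cousin of pg_strcasecmp() */

/* Check min <= max with a single case-insensitive comparison, covering
 * the bare "TLSv1" spelling without a special case. Assumes both inputs
 * were already validated as known version names. */
static int
protocol_range_ok(const char *min, const char *max)
{
    return strcasecmp(min, max) <= 0;
}
```

This relies on no future version name breaking the lexicographic ordering (e.g. a hypothetical "TLSv1.10" would), which is a safe bet for TLS.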
{
"msg_contents": "On Wed, Jan 15, 2020 at 02:58:09PM +0900, Michael Paquier wrote:\n> On Tue, Jan 14, 2020 at 11:01:00PM +0100, Daniel Gustafsson wrote:\n>> Files renamed to match existing naming convention, the rest of the patch left\n>> unchanged.\n>\n> [previous review]\n\nOne thing I remembered after sleeping on it is that we can split the\npatch into two parts: the refactoring pieces and the addition of the\noptions for libpq. The previous review mostly impacts the libpq part,\nand the split is straight-forward, so attached is a patch for only the\nrefactoring pieces with some fixes and tweaks. I have tested it with\nand without OpenSSL, using 1.0.2 and 1.1.0 on Linux and Windows\n(MSVC). Those tests have allowed me to find an error in the previous\npatch that I missed: the new files openssl.h and protocol_openssl.c\nstill declared SSL_CTX_set_min/max_proto_version as static functions,\nso compilation was broken when trying to use OpenSSL <= 1.0.2.\n\nIf that looks fine, I would like to get that part committed first.\nDaniel, any thoughts?\n--\nMichael",
"msg_date": "Thu, 16 Jan 2020 12:22:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "> On 16 Jan 2020, at 04:22, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Jan 15, 2020 at 02:58:09PM +0900, Michael Paquier wrote:\n>> On Tue, Jan 14, 2020 at 11:01:00PM +0100, Daniel Gustafsson wrote:\n>>> Files renamed to match existing naming convention, the rest of the patch left\n>>> unchanged.\n>> \n>> [previous review]\n> \n> One thing I remembered after sleeping on it is that we can split the\n> patch into two parts: the refactoring pieces and the addition of the\n> options for libpq.\n\nCorrect, they are mostly independent (the refactoring doesn't make a lot of\nsense without the follow-up patch, but the min/max patch can be kept more\nreadable without the refactoring in it as well).\n\n> The previous review mostly impacts the libpq part,\n> and the split is straight-forward, so attached is a patch for only the\n> refactoring pieces with some fixes and tweaks. I have tested it with\n> and without OpenSSL, using 1.0.2 and 1.1.0 on Linux and Windows\n> (MSVC). Those tests have allowed me to find an error in the previous\n> patch that I missed: the new files openssl.h and protocol_openssl.c\n> still declared SSL_CTX_set_min/max_proto_version as static functions,\n> so compilation was broken when trying to use OpenSSL <= 1.0.2.\n\nDoh .. thanks.\n\n> If that looks fine, I would like to get that part committed first.\n> Daniel, any thoughts?\n\nThe patch looks fine to me, I don't see an issue with splitting it into a\nrefactoring patch and a TLS min/max version patch.\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 16 Jan 2020 09:56:01 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 09:56:01AM +0100, Daniel Gustafsson wrote:\n> The patch looks fine to me, I don't an issue with splitting it into a\n> refactoring patch and a TLS min/max version patch.\n\nThanks, committed the refactoring part then. If the buildfarm breaks\nfor a reason or another, the window to look at is narrower than if we\nhad the full set of changes, and the git history is cleaner. I\nnoticed as well a compilation warning when compiling with OpenSSL\n1.0.2 from protocol_openssl.c because of missing declarations of the\ntwo routines because the header declaration was incorrect.\n\nCould you please rebase and fix the remaining pieces of the patch?\n--\nMichael",
"msg_date": "Fri, 17 Jan 2020 10:09:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "On Fri, Jan 17, 2020 at 10:09:54AM +0900, Michael Paquier wrote:\n> Could you please rebase and fix the remaining pieces of the patch?\n\nAnd while I remember, you may want to add checks for incorrect bounds\nwhen validating the values in fe-connect.c... The same arguments as\nfor the backend part apply because we'd want to make the\nimplementation a maximum pluggable with all SSL libraries.\n--\nMichael",
"msg_date": "Fri, 17 Jan 2020 11:38:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "> On 17 Jan 2020, at 03:38, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Fri, Jan 17, 2020 at 10:09:54AM +0900, Michael Paquier wrote:\n>> Could you please rebase and fix the remaining pieces of the patch?\n> \n> And while I remember, you may want to add checks for incorrect bounds\n> when validating the values in fe-connect.c... The same arguments as\n> for the backend part apply because we'd want to make the\n> implementation a maximum pluggable with all SSL libraries.\n\nAgreed.\n\nAttached is a v5 of the patch which hopefully address all the comments raised,\nsorry for the delay.\n\ncheers ./daniel",
"msg_date": "Fri, 24 Jan 2020 12:19:31 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "On Fri, Jan 24, 2020 at 12:19:31PM +0100, Daniel Gustafsson wrote:\n> Attached is a v5 of the patch which hopefully address all the comments raised,\n> sorry for the delay.\n\nThanks for the new version.\n\npsql: error: could not connect to server: invalid or unsupported\nmaximum protocol version specified: TLSv1.3\nRunning the regression tests with OpenSSL 1.0.1, 1.0.2 or 1.1.0 fails,\nbecause TLSv1.3 (TLS1_3_VERSION) is not supported in those versions.\nI think that it is better to just rely on TLSv1.2 for all that,\nknowing that the server default for the minimum bound is v1.2.\n\n+test_connect_fails(\n+ $common_connstr,\n+ \"sslrootcert=ssl/root+server_ca.crt sslmode=require\nsslminprotocolversion=TLSv1.3 sslmaxprotocolversion=tlsv1.2\",\n+ qr/invalid protocol version range/,\n+ \"connect with an incorrect range of TLS protocol versions\nleaving no versions allowed\");\n+\n+test_connect_fails(\n+ $common_connstr,\n+ \"sslrootcert=ssl/root+server_ca.crt sslmode=require\nsslminprotocolversion=TLSv1.3 sslmaxprotocolversion=tlsv1\",\n+ qr/invalid protocol version range/,\n+ \"connect with an incorrect range of TLS protocol versions\nleaving no versions allowed\");\nThis is testing the same pattern twice, and I am not sure if it\nis worth bothering about the special case with TLSv1 (using just a\ncomparison with pg_strcasecmp you don't actually need those special\nchecks..).\n\nTests should make sure that a failure happens when an incorrect value\nis set for sslminprotocolversion and sslmaxprotocolversion.\n\nFor readability, I think that it is better to consider NULL or empty\nvalues for the parameters to be valid. They are actually valid\nvalues, because they just get ignored when creating the connection.\n\nAdding an assertion within the routine for the protocol range check to\nmake sure that the values are valid makes the code more robust.\n\n+ {\"sslminprotocolversion\", \"PGSSLMINPROTOCOLVERSION\", NULL,\nNULL,\n+ \"SSL-Minimum-Protocol-Version\", \"\", /*\nsizeof(\"TLSv1.x\") */ 7,\n+ offsetof(struct pg_conn, sslminprotocolversion)},\n+\n+ {\"sslmaxprotocolversion\", \"PGSSLMAXPROTOCOLVERSION\", NULL,\nNULL,\n+ \"SSL-Maximum-Protocol-Version\", \"\", /*\nsizeof(\"TLSv1.x\") */ 7,\nMissing a zero-terminator in the count here. And actually\ngssencmode is wrong as well.. I'll report that on a different\nthread.\n\n+# Test min/mix protocol versions\nTypo here.\n\n+bool\n+pq_verify_ssl_protocol_option(const char *protocolversion)\n[...]\n+bool\n+pq_verify_ssl_protocol_range(const char *min, const char *max)\nBoth routines are just used in fe-connect.c to validate the connection\nparameters, so it is better to keep them static and in fe-connect.c in\nmy opinion.\n\n+ if (*(min + strlen(\"TLSv1.\")) > *(max + strlen(\"TLSv1.\")))\n+ return false;\nIt is enough to use pg_strcasecmp() here.\n\nHm. I am not sure that having a separate section \"Client Protocol\nUsage\" brings much, so I have removed this one, and added an extra\nsentence for the maximum protocol regarding its value for testing or\nprotocol compatibility.\n\nThe regression tests of postgres_fdw failed because of the new\nparameters. One update later, they run fine.\n\nSo, attached is an updated version of the patch that I have spent a\ncouple of hours polishing. What do you think?\n--\nMichael",
"msg_date": "Mon, 27 Jan 2020 15:01:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "> On 27 Jan 2020, at 07:01, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Fri, Jan 24, 2020 at 12:19:31PM +0100, Daniel Gustafsson wrote:\n>> Attached is a v5 of the patch which hopefully address all the comments raised,\n>> sorry for the delay.\n> \n> Thanks for the new version.\n\nThanks for review and hackery!\n\n> psql: error: could not connect to server: invalid or unsupported\n> maximum protocol version specified: TLSv1.3\n> Running the regression tests with OpenSSL 1.0.1, 1.0.2 or 1.1.0 fails,\n> because TLSv1.3 (TLS1_3_VERSION) is not supported in those versions.\n> I think that it is better to just rely on TLSv1.2 for all that,\n> knowing that the server default for the minimum bound is v1.2.\n\nYes, of course, brainfade on my part.\n\n> + {\"sslminprotocolversion\", \"PGSSLMINPROTOCOLVERSION\", NULL,\n> NULL,\n> + \"SSL-Minimum-Protocol-Version\", \"\", /*\n> sizeof(\"TLSv1.x\") */ 7,\n> + offsetof(struct pg_conn, sslminprotocolversion)},\n> +\n> + {\"sslmaxprotocolversion\", \"PGSSLMAXPROTOCOLVERSION\", NULL,\n> NULL,\n> + \"SSL-Maximum-Protocol-Version\", \"\", /*\n> sizeof(\"TLSv1.x\") */ 7,\n> Missing a zero-terminator in the count here. And actually\n> gssencmode is wrong as well.. I'll report that on a different\n> thread.\n\nNice catch, I plead guilty to copy-pasting and transferring the error.\n\n> +bool\n> +pq_verify_ssl_protocol_option(const char *protocolversion)\n> [...]\n> +bool\n> +pq_verify_ssl_protocol_range(const char *min, const char *max)\n> Both routines are just used in fe-connect.c to validate the connection\n> parameters, so it is better to keep them static and in fe-connect.c in\n> my opinion.\n\nOk. I prefer to keep the TLS code collected in fe-secure.c, but I don't have\nstrong enough opinions to kick up a fuzz.\n\n> Hm. I am not sure that having a separate section \"Client Protocol\n> Usage\" brings much, so I have removed this one, and added an extra\n> sentence for the maximum protocol regarding its value for testing or\n> protocol compatibility.\n\nI'm not convinced, this forces the reader to know what to look for (the\nconnection parameters) rather than being informed. If anything, I think we\nneed more explanatory sections in the docs.\n\n> The regression tests of postgres_fdw failed because of the new\n> parameters. One update later, they run fine.\n\nDoh, thanks.\n\n> So, attached is an updated version of the patch that I have spent a\n> couple of hours polishing. What do you think?\n\nOverall a +1 on this version, thanks for picking it up!\n\ncheers ./daniel\n\n",
"msg_date": "Mon, 27 Jan 2020 09:49:09 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "On Mon, Jan 27, 2020 at 09:49:09AM +0100, Daniel Gustafsson wrote:\n>> On 27 Jan 2020, at 07:01, Michael Paquier <michael@paquier.xyz> wrote:\n> Ok. I prefer to keep the TLS code collected in fe-secure.c, but I don't have\n> strong enough opinions to kick up a fuzz.\n\nThey are parameter-related, so fe-connect.c made the most sense to me.\nThe routine checking after the range makes the code more readable IMO\neven if we only use it in one place.\n\n>> Hm. I am not sure that having a separate section \"Client Protocol\n>> Usage\" brings much, so I have removed this one, and added an extra\n>> sentence for the maximum protocol regarding its value for testing or\n>> protocol compatibility.\n> \n> I'm not convinced, this forces the reader to know what to look for (the\n> connection parameters) rather than being informed. If anything, I think we\n> need more explanatory sections in the docs.\n> \n>> So, attached is an updated version of the patch that I have spent a\n>> couple of hours polishing. What do you think?\n> \n> Overall a +1 on this version, thanks for picking it up!\n\nThanks. I have committed the bulk of the changes. As mentioned\npreviously, I still have doubts about the value of the new section for\nthe new protocol usage. Once reworded a bit, I finish with the\nattached, and the following paragraph for libpq.sgml:\n+ <sect2>\n+ <title>Client Protocol Usage</title>\n+ <para>\n+ When connecting using SSL, the client and server negotiate which protocol\n+ to use for the connection. <productname>PostgreSQL</productname> supports\n+ <literal>TLSv1</literal>, <literal>TLSv1.1</literal>,\n+ <literal>TLSv1.2</literal> and <literal>TLSv1.3</literal>, but the\n+ protocols available depend on the version of\n+ <productname>OpenSSL</productname> that the client and the backend are\n+ using. The minimum requested version can be specified with\n+ <literal>sslminprotocolversion</literal>, which will ensure that the\n+ connection uses that protocol version or higher. The maximum requested\n+ version can be specified with <literal>sslmaxprotocolversion</literal>.\n+ </para>\n+ </sect2>\n\nNow, we already mention in the docs which values the min and max\nbounds support, and that the version of OpenSSL used by the backend\nand the frontend are impacted by that depending on what version of\nOpenSSL one or the other link to. The remaining piece is that the\nclient and the server negotiate the protocol they use, which is an\nobvious fact, at least to me..\n--\nMichael",
"msg_date": "Tue, 28 Jan 2020 12:53:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "> On 28 Jan 2020, at 04:53, Michael Paquier <michael@paquier.xyz> wrote:\n\n> Now, we already mention in the docs which values the min and max\n> bounds support, and that the version of OpenSSL used by the backend\n> and the frontend are impacted by that depending on what version of\n> OpenSSL one or the other link to. The remaining piece is that the\n> client and the server negotiate the protocol they use, which is an\n> obvious fact, at least to me..\n\nYou don't really qualify as the target audience for basic, yet not always\nuniversally known/understood, sections in the documentation though =) I've\nheard enough complaints that it's complicated to set up that I think we need to\nmake the docs more digestable, but if noone else +1's then lets drop it.\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 28 Jan 2020 11:29:39 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 11:29:39AM +0100, Daniel Gustafsson wrote:\n> You don't really qualify as the target audience for basic, yet not always\n> universally known/understood, sections in the documentation though =)\n\nLikely I don't.\n\n> I've heard enough complaints that it's complicated to set up that I\n> think we need to make the docs more digestable, but if noone else\n> +1's then lets drop it.\n\nSure. Now I am pretty sure that we would need a bit more than just\nsaying that the SSL protocol is negotiated between the backend and\nlibpq if we add a new section.\n--\nMichael",
"msg_date": "Wed, 29 Jan 2020 15:23:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "Can we reconsider whether we really want to name the new settings like \n\"sslminprotocolversion\", or whether we could add some underscores, both \nfor readability and for consistency with the server-side options?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 24 Apr 2020 12:56:31 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "> On 24 Apr 2020, at 12:56, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> Can we reconsider whether we really want to name the new settings like \"sslminprotocolversion\", or whether we could add some underscores, both for readability and for consistency with the server-side options?\n\nThat was brought up by Michael in the thread, but none of us followed up on it\nit seems. The current name was chosen to be consistent with the already\nexisting ssl* client-side settings, but I don't really have strong opinions on\nif that makes sense or not. Perhaps use ssl_m{in|max}_protocolversion to make\nit more readable?\n\nThe attached renames the userfacing setting, but keeps the environment variable\nwithout underscores as most settings have env vars without underscores.\n\ncheers ./daniel",
"msg_date": "Fri, 24 Apr 2020 14:03:04 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "On Fri, Apr 24, 2020 at 02:03:04PM +0200, Daniel Gustafsson wrote:\n> That was brought up by Michael in the thread, but none of us followed up on it\n> it seems. The current name was chosen to be consistent with the already\n> existing ssl* client-side settings, but I don't really have strong opinions on\n> if that makes sense or not. Perhaps use ssl_m{in|max}_protocolversion to make\n> it more readable?\n\nThere was no hard push in favor of this comment so I did not insist,\nbut I am not wedded to the existing connection parameter names.\n\n- {\"sslminprotocolversion\", \"PGSSLMINPROTOCOLVERSION\", NULL, NULL,\n+ {\"ssl_min_protocolversion\", \"PGSSLMINPROTOCOLVERSION\", NULL, NULL,\nShouldn't that actually be \"ssl_min_protocol_version\" with one extra\nunderscore?\n\n> The attached renames the userfacing setting, but keeps the environment variable\n> without underscores as most settings have env vars without underscores.\n\nThere are two in this case: PG_COLOR and PG_COLORS. For readability\nit could make sense to use something like PG_SSL_MIN_PROTOCOL_VERSION\nor PGSSL_MIN_PROTOCOL_VERSION, but like Daniel I'd rather keep the env\nvariables without underscores.\n--\nMichael",
"msg_date": "Sat, 25 Apr 2020 09:29:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "On 2020-04-24 14:03, Daniel Gustafsson wrote:\n>> On 24 Apr 2020, at 12:56, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>>\n>> Can we reconsider whether we really want to name the new settings like \"sslminprotocolversion\", or whether we could add some underscores, both for readability and for consistency with the server-side options?\n> \n> That was brought up by Michael in the thread, but none of us followed up on it\n> it seems. The current name was chosen to be consistent with the already\n> existing ssl* client-side settings, but I don't really have strong opinions on\n> if that makes sense or not. Perhaps use ssl_m{in|max}_protocolversion to make\n> it more readable?\n\nThe names on the backend side are ssl_{min|max|_protocol_version.\n\n> The attached renames the userfacing setting, but keeps the environment variable\n> without underscores as most settings have env vars without underscores.\n\nKeeping the environment variable as is seems fine (also consistent with \n\"channel_binding\").\n\nI would, however, prefer to also rename the internal symbols.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 26 Apr 2020 14:01:04 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "> On 26 Apr 2020, at 14:01, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> On 2020-04-24 14:03, Daniel Gustafsson wrote:\n>>> On 24 Apr 2020, at 12:56, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>>> \n>>> Can we reconsider whether we really want to name the new settings like \"sslminprotocolversion\", or whether we could add some underscores, both for readability and for consistency with the server-side options?\n>> That was brought up by Michael in the thread, but none of us followed up on it\n>> it seems. The current name was chosen to be consistent with the already\n>> existing ssl* client-side settings, but I don't really have strong opinions on\n>> if that makes sense or not. Perhaps use ssl_m{in|max}_protocolversion to make\n>> it more readable?\n> \n> The names on the backend side are ssl_{min|max|_protocol_version.\n\nThat was the preferred name by Michael too elsewhere in the thread, so went\nahead and made it so.\n\n>> The attached renames the userfacing setting, but keeps the environment variable\n>> without underscores as most settings have env vars without underscores.\n> \n> Keeping the environment variable as is seems fine (also consistent with \"channel_binding\").\n> \n> I would, however, prefer to also rename the internal symbols.\n\nDone in the attached v2.\n\ncheers ./daniel",
"msg_date": "Sun, 26 Apr 2020 23:20:01 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "On Sun, Apr 26, 2020 at 11:20:01PM +0200, Daniel Gustafsson wrote:\n> That was the preferred name by Michael too elsewhere in the thread, so went\n> ahead and made it so.\n\nThanks Daniel.\n\n>> I would, however, prefer to also rename the internal symbols.\n> \n> Done in the attached v2.\n\nWhat you have here looks fine to me. Peter, what do you think?\n--\nMichael",
"msg_date": "Mon, 27 Apr 2020 14:45:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "On 2020-04-27 07:45, Michael Paquier wrote:\n> On Sun, Apr 26, 2020 at 11:20:01PM +0200, Daniel Gustafsson wrote:\n>> That was the preferred name by Michael too elsewhere in the thread, so went\n>> ahead and made it so.\n> \n> Thanks Daniel.\n> \n>>> I would, however, prefer to also rename the internal symbols.\n>>\n>> Done in the attached v2.\n> \n> What you have here looks fine to me. Peter, what do you think?\n\nThis looks good to me, except that\n\nxreflabel=\"ssl-min-protocol-version\"\n\netc. needs to be changed to use underscores.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 29 Apr 2020 22:33:26 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
},
{
"msg_contents": "On Wed, Apr 29, 2020 at 10:33:26PM +0200, Peter Eisentraut wrote:\n> This looks good to me, except that\n> \n> xreflabel=\"ssl-min-protocol-version\"\n> \n> etc. needs to be changed to use underscores.\n\nIndeed, thanks. I have fixed this part and applied the patch.\n--\nMichael",
"msg_date": "Thu, 30 Apr 2020 13:48:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Setting min/max TLS protocol in clientside libpq"
}
]
[
{
"msg_contents": "Hi,\nI believe PostgreSQL can benefit from changing the alert level of compilation warnings.\nThe current Level3 level for windows does not show any alerts, but that does not mean that there are no problems.\nChanging the level to Level4 and its equivalent for GCC in Unix environments will show many warnings for shadow variables, including global variables.\nTrue, there will also be many silly alerts that can be safely disabled.\nShadow variables, although they may not currently represent bugs, may be hiding errors, or at the very least, it is a waste of variable declaration.\nWith the current Level3 level, development is no longer checking and correcting shadow variables.\n\nAny comments?\n\nBest regards,\nRanier Vilela\n\n",
"msg_date": "Wed, 4 Dec 2019 01:24:28 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "[Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "On Tue, Dec 3, 2019 at 8:24 PM Ranier Vilela <ranier_gyn@hotmail.com> wrote:\n> I believe PostgreSQL can benefit from changing the alert level of compilation warnings.\n> The current Level3 level for windows does not show any alerts, but that does not mean that there are no problems.\n> Changing the level to Level4 and its equivalent for GCC in Unix environments will show many warnings for shadow variables, including global variables.\n> True, there will also be many silly alerts that can be safely disabled.\n> Shadow variables, although they may not currently represent bugs, may be hiding errors, or at the very least, it is a waste of variable declaration.\n> With the current Level3 level, development is no longer checking and correcting shadow variables.\n>\n> Any comments?\n\nMost of us don't develop on Windows, so changing warning levels on\nWindows won't really affect what developers see on their own machines,\nand thus probably doesn't have much value. It might be a good idea to\ntry to clean up some/many cases of shadowed variables, though.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 5 Dec 2019 10:28:35 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "Robert wrote:\n>Most of us don't develop on Windows, so changing warning levels on\n>Windows won't really affect what developers see on their own machines,\n>and thus probably doesn't have much value. \nYes the report is from msvc 2017.\nEven so, there is some failure to review or compile in Unix environment, because there are so many cases.\n-Wshadow with GCC can show the alerts.\n\n>It might be a good idea to>try to clean up some/many cases of shadowed >variables, though.\nInterested in submitting the msvc 2017 report?\n\nRanier Vilela\n\n",
"msg_date": "Thu, 5 Dec 2019 16:26:32 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "On Thu, Dec 5, 2019 at 11:26 AM Ranier Vilela <ranier_gyn@hotmail.com> wrote:\n> Even so, there is some failure to review or compile in Unix environment, because there are so many cases.\n> -Wshadow with GCC can show the alerts.\n\nI mean, compiler warnings are not errors, and there's no requirement\nthat we fix every warning. I compile with -Wall -Werror regularly and\nthat works fine. I don't necessarily feel like I have to turn on more\nwarnings that aren't shown by default on the platforms I use. One way\nof looking at it: if a warning isn't enabled by -Wall, it's probably\nsomething that either isn't that important or would generate too many\nfalse positives.\n\n> Interested in submitting the msvc 2017 report?\n\nI think if this is an issue you care about, it's up to you to think of\ndoing something about it, like going through the report and submitting\npatches for whichever cases you think need to be addressed. Cleaning\nup stuff like this is potentially a lot of work, and I struggle to\nkeep up with all the work I've already got.\n\nIf you do decide to work on this, I recommend against preparing a\nsingle giant patch that changes every single one blindly. Try to think\nabout which cases are the most egregious/dangerous and propose patches\nfor those first. If those are accepted then you can move on to other\ncases.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 5 Dec 2019 13:41:20 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "\n\nOn 12/3/19 5:24 PM, Ranier Vilela wrote:\n> Hi,\n> I believe PostgreSQL can benefit from changing the alert level of compilation warnings.\n> The current Level3 level for windows does not show any alerts, but that does not mean that there are no problems.\n> Changing the level to Level4 and its equivalent for GCC in Unix environments will show many warnings for shadow variables, including global variables.\n> True, there will also be many silly alerts that can be safely disabled.\n> Shadow variables, although they may not currently represent bugs, may be hiding errors, or at the very least, it is a waste of variable declaration.\n> With the current Level3 level, development is no longer checking and correcting shadow variables.\n> \n> Any comments?\n\nI suggested increasing the default warnings in an email some time ago,\nto which Tom made reasonable objections. You might want to take a\nlook at his comments, and consider if you can overcome the concerns\nhe had:\n\nhttps://www.postgresql.org/message-id/25938.1487367117%40sss.pgh.pa.us\n\nand\n\nhttps://www.postgresql.org/message-id/30007.1487374499%40sss.pgh.pa.us\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Thu, 5 Dec 2019 13:06:38 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "Mark Dilger <hornschnorter@gmail.com> writes:\n> On 12/3/19 5:24 PM, Ranier Vilela wrote:\n>> Any comments?\n\n> I suggested increasing the default warnings in an email some time ago,\n> to which Tom made reasonable objections.\n\nYes, that whole thread is worth a read in this context:\n\nhttps://www.postgresql.org/message-id/flat/CAEepm%3D15e9L695yVCO-_OkBVbsPupyXqzYWzzDmj-bdJ6o2%2BPw%40mail.gmail.com\n\nThe original concern about --disable-integer-datetimes is history\nnow, but I think the variability of error reports between compilers\nis still instructive and relevant.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 05 Dec 2019 18:41:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "On Thu, Dec 05, 2019 at 01:41:20PM -0500, Robert Haas wrote:\n> If you do decide to work on this, I recommend against preparing a\n> single giant patch that changes every single one blindly. Try to think\n> about which cases are the most egregious/dangerous and propose patches\n> for those first. If those are accepted then you can move on to other\n> cases.\n\n+1. An case-by-case analysis is key here because it depends on the\ncontext of the code. I am ready to bet that we don't care about most\nof them, still that there are cases which actually matter a lot.\n--\nMichael",
"msg_date": "Fri, 6 Dec 2019 10:11:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "De: Mark Dilger <hornschnorter@gmail.com>\nEnviado: quinta-feira, 5 de dezembro de 2019 21:06\n>I suggested increasing the default warnings in an email some time ago,\n>to which Tom made reasonable objections. You might want to take a\n>look at his comments, and consider if you can overcome the concerns\n>he had:\nI understand Tom's considerations.\nWhat I mean is, everyone already knows, it's easier and safer to fix this kind of mistake early.\nI'll do as Robert asked, but as with global variables, it's hard to fix.\nWhat did the original author want, use the global variable or not use it by overriding the name.\nIf it was to use the global variable, it will affect the behavior of the function, I believe.\n\nregards,\nRanier Vilela\n\n",
"msg_date": "Fri, 6 Dec 2019 12:59:15 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "On Fri, Dec 6, 2019 at 7:59 AM Ranier Vilela <ranier_gyn@hotmail.com> wrote:\n> What did the original author want, use the global variable or not use it by overriding the name.\n> If it was to use the global variable, it will affect the behavior of the function, I believe.\n\nWell, you haven't provided any examples, so it's hard to be sure, but\nI suspect that the vast majority of these are not actually bugs, but\njust name collisions that don't really matter. Some of them could even\nbe Windows-specific things. For example, if Windows - or any other\nplatform - happened to have a variable declared in a library header\nfile someplace that is relatively commonly used within PostgreSQL as a\nlocal variable name (e.g. \"lc\"), it would produce tons of name\ncollisions, none of which would be bugs.\n\nThe thing is, it's well-known that this is not good programming\npractice, and I doubt that any committer would intentionally commit\ncode that used the same variable name for a file-level global and a\nlocal variable in that same file. Perhaps a few such cases have crept\nin by accident, but I bet they are rare. What's probably more likely\nis that somebody - either a PostgreSQL developer or a Microsoft\ndeveloper - carelessly exposed a global name that's not very\ndistinctive, and it then collided -- either then or later -- with some\nlocal variables in various places within the PostgreSQL code. If those\nare names exposed by PostgreSQL, we should just rename the global\nvariables we exposed to have more distinctive names. If they're\nexposed by Microsoft, we don't have that option, so we either have to\nrename the local variables that shadow them, or decide that we don't\ncare.\n\nBased on previous discussion in this forum, my guess is that popular\nsentiment will depend quite a bit on how reasonable it seems that\nMicrosoft chose to use the name in the first place. If there's an\n\"extern int i;\" declaration someplace in a Windows header file, we are\nnot for that reason going to abandon our practice of using \"i\" for\nloop variables; we're (probably) just going to say nasty things about\nMicrosoft and keep doing what we're doing. If there's an \"extern int\n__msft_ftw;\" declaration in a Windows header file and for some reason\nwe've used that same name in our code, we're going to decide we were\ndumb to use that as a name and change it. The amount of code churn\nalso plays a role. People will be reluctant to change thousands of\nlines of PostgreSQL code to work around Microsoft-specific problems,\nbut if it's only a little bit of code then people won't mind very\nmuch.\n\nMaybe you want to post a few examples. It's hard to discuss in the abstract.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 6 Dec 2019 10:48:53 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "Robert Haas wrote:\n>Maybe you want to post a few examples. It's hard to discuss in the abstract.\nI am working on the patch.\nI think this is a great example.\nI do not know if it is better to rename the local parameter, or if it should be renamed the global variable.\n\nline: 68\nvar char **synchronous_commit\nbackend/commands/subscriptioncmds.c\n\nglobal var declared here:\n/include/access/xact.h(82)\n\nOne question, is it better to submit the patch on this topic, or create a new one?\n\nregards,\nRanier Vilela\n\n",
"msg_date": "Fri, 6 Dec 2019 23:18:14 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "\n\nOn 12/6/19 3:18 PM, Ranier Vilela wrote:\n> Robert Haas wrote:\n>> Maybe you want to post a few examples. It's hard to discuss in the abstract.\n> I am working on the patch.\n> I think this is a great example.\n> I do not know if it is better to rename the local parameter, or if it should be renamed the global variable.\n> \n> line: 68\n> var char **synchronous_commit\n> backend/commands/subscriptioncmds.c\n> \n> global var declared here:\n> /include/access/xact.h(82)\n\nThe local variables in subscriptioncmds.c named \"synchronous_commit\"\nappear more times in that one file than the global variable appears\nin total in the rest of the system, but that doesn't include other\nreferences to the guc in code comments, in user facing strings, etc.\n\nI think it is better to change this just in subscriptioncmds.c than\nto change the global variable name everywhere else. I also tend to\nagree with you that shadowing the global variable is bad practice.\n\n> One question, is it better to submit the patch on this topic, or create a new one?\n\nYou appear to be planning to submit lots of patches about lots of\ndifferent shadowed variables. If you start a new thread for this\nparticular variable, it seems you'd probably do that again and again\nfor the other ones, and that might be tedious for readers of the\n-hackers list who aren't interested. To start, I'd prefer to see\nthe patch on this thread.\n\n\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Fri, 6 Dec 2019 16:44:47 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "This is the first part of the variable shadow fixes.\nBasically it consists of renaming the variables in collision with the global ones, with the minimum change in the semantics.\n\nmake check pass all the tests.\n\nregards,\nRanier Vilela",
"msg_date": "Sat, 7 Dec 2019 23:42:38 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "\n\nOn 12/7/19 3:42 PM, Ranier Vilela wrote:\n> This is the first part of the variable shadow fixes.\n> Basically it consists of renaming the variables in collision with the global ones, with the minimum change in the semantics.\n> \n> make check pass all the tests.\n\nI think it would be better to split this patch into separate files,\none for each global variable that is being shadowed. The reason\nI say so is apparent looking at the first one in the patch,\nRedoRecPtr. This process global variable is defined in xlog.c:\n\n static XLogRecPtr RedoRecPtr;\n\nand then, somewhat surprisingly, passed around between static\nfunctions defined within that same file, such as:\n\n RemoveOldXlogFiles(...)\n\nwhich in the current code only ever gets a copy of the global,\nwhich begs the question why it needs this passed as a parameter\nat all. All the places calling RemoveOldXlogFiles are within\nthis file, and all of them pass the global, so why bother?\n\nAnother function within xlog.c behaves similarly:\n\n RemoveXlogFile(...)\n\nOnly here, the callers sometimes pass the global RedoRecPtr\n(though indirectly, since they themselves received it as an\nargument) and sometimes they pass in InvalidXLogRecPtr, which\nis just a constant:\n\n src/include/access/xlogdefs.h:#define InvalidXLogRecPtr\t0\n\nSo it might make sense to remove the parameter from this\nfunction, too, and replace it with a flag parameter named\nsomething like \"is_valid\", or perhaps split the function\ninto two functions, one for valid and one for invalid.\n\nI'm not trying to redesign xlog.c's functions in this email\nthread, but only suggesting that these types of arguments\nmay ensue for each global variable in your patch, and it will\nbe easier for a committer to know if there is a consensus\nabout any one of them than about the entire set. 
When I\nsuggested you do this patch set all on this thread, I was\nstill expecting multiple patches, perhaps named along the\nlines of:\n\n unshadow.RedoRecPtr.patch.1\n unshadow.wal_segment_size.patch.1\n unshadow.synchronous_commit.patch.1\n unshadow.wrconn.patch.1\n unshadow.progname.patch.1\n unshadow.am_syslogger.patch.1\n unshadow.days.patch.1\n unshadow.months.patch.1\n\netc. I'm uncomfortable giving you negative feedback of this\nsort, since I think you are working hard to improve postgres\nand I really appreciate it, so later tonight I'll try to come\nback, split your patch for you as described, add an entry to\nthe commitfest if you haven't already, and mark myself as a\nreviewer.\n\nThanks again for the hard work and the patch!\n\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Sun, 8 Dec 2019 10:25:21 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "Ranier Vilela <ranier_gyn@hotmail.com> writes:\n> This is the first part of the variable shadow fixes.\n> Basically it consists of renaming the variables in collision with the global ones, with the minimum change in the semantics.\n\nI don't think I'm actually on board with the goal here.\n\nBasically, if we take this seriously, we're throwing away the notion of\nnested variable scopes and programming as if we had just a flat namespace.\nThat wasn't any fun when we had to do it back in assembly-code days, and\nI don't see a good reason to revert to that methodology today.\n\nIn a few of these cases, like the RedoRecPtr changes, there *might* be\nan argument that there's room for confusion about whether the code could\nhave meant to reference the similarly-named global variable. But it's\njust silly to make that argument in places like your substitution of\n/days/ndays/ in date.c.\n\nBased on this sample, I reject the idea that we ought to be trying to\neliminate this class of warnings just because somebody's compiler can be\ninduced to emit them. If you want to make a case-by-case argument that\nparticular situations of this sort are bad programming style, let's have\nthat argument by all means. But it needs to be case-by-case, not just\ndropping a large patch on us containing a bunch of unrelated changes\nand zero specific justification for any of them.\n\nIOW, I don't think you've taken to heart Robert's upthread advice that\nthis needs to be case-by-case and based on literary not mechanical\nconsiderations.\n\nBTW, if we do go forward with changing the RedoRecPtr uses, I'm not\nin love with \"XRedoRecPtr\" as a replacement parameter name; it conveys\nnothing much, and the \"X\" prefix is already overused in that area of\nthe code. Perhaps \"pRedoRecPtr\" would be a better choice? 
Or maybe\nmake the local variables be all-lower-case \"redorecptr\", which would\nfit well in context in places like\n\n-RemoveXlogFile(const char *segname, XLogRecPtr RedoRecPtr, XLogRecPtr endptr)\n+RemoveXlogFile(const char *segname, XLogRecPtr XRedoRecPtr, XLogRecPtr endptr)\n\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 08 Dec 2019 13:51:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "Mark Dilger <hornschnorter@gmail.com> writes:\n> I think it would be better to split this patch into separate files,\n> one for each global variable that is being shadowed. The reason\n> I say so is apparent looking at the first one in the patch,\n> RedoRecPtr. This process global variable is defined in xlog.c:\n> static XLogRecPtr RedoRecPtr;\n> and then, somewhat surprisingly, passed around between static\n> functions defined within that same file, such as:\n> RemoveOldXlogFiles(...)\n> which in the current code only ever gets a copy of the global,\n> which begs the question why it needs this passed as a parameter\n> at all. All the places calling RemoveOldXlogFiles are within\n> this file, and all of them pass the global, so why bother?\n\nI was wondering about that too. A look in the git history seems\nto say that it is the fault of the fairly-recent commit d9fadbf13,\nwhich did things like this:\n\n /*\n * Recycle or remove all log files older or equal to passed segno.\n *\n- * endptr is current (or recent) end of xlog, and PriorRedoRecPtr is the\n- * redo pointer of the previous checkpoint. These are used to determine\n+ * endptr is current (or recent) end of xlog, and RedoRecPtr is the\n+ * redo pointer of the last checkpoint. 
These are used to determine\n * whether we want to recycle rather than delete no-longer-wanted log files.\n */\n static void\n-RemoveOldXlogFiles(XLogSegNo segno, XLogRecPtr PriorRedoPtr, XLogRecPtr endptr)\n+RemoveOldXlogFiles(XLogSegNo segno, XLogRecPtr RedoRecPtr, XLogRecPtr endptr)\n {\n DIR *xldir;\n struct dirent *xlde;\n\nThat is, these arguments *used* to be a different LSN pointer, and that\ncommit changed them to be mostly equal to RedoRecPtr, and made what\nseems like a not very well-advised renaming to go with that.\n\n> So it might make sense to remove the parameter from this\n> function, too, and replace it with a flag parameter named\n> something like \"is_valid\", or perhaps split the function\n> into two functions, one for valid and one for invalid.\n\nDon't think I buy that. The fact that these arguments were until recently\ndifferent from RedoRecPtr suggests that they might someday be different\nagain, whereupon we'd have to laboriously revert such a parameter redesign.\nI think I'd just go for names that don't have a hard implication that\nthe parameter values are the same as any particular global variable.\n\n> I'm not trying to redesign xlog.c's functions in this email\n> thread, but only suggesting that these types of arguments\n> may ensue for each global variable in your patch,\n\nIndeed. Once again, these are case-by-case issues, not something\nthat can be improved by a global search-and-replace without much\nconsideration for the details of each case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 08 Dec 2019 14:14:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": ">I think it would be better to split this patch into separate files,\n>one for each global variable that is being shadowed. \nOk, I agree.\n\n> The reasonI say so is apparent looking at the first one in the patch,\n>RedoRecPtr. This process global variable is defined in xlog.c:\n> static XLogRecPtr RedoRecPtr;\n>and then, somewhat surprisingly, passed around between static\n>functions defined within that same file, such as:\n> RemoveOldXlogFiles(...)\n>which in the current code only ever gets a copy of the global,\n>which begs the question why it needs this passed as a parameter\n>at all. All the places calling RemoveOldXlogFiles are within\n>this file, and all of them pass the global, so why bother?\nIn general I do not agree to use global variables. But I understand when you use it, I believe it is a necessary evil.\nSo I think that maybe the original author, has the same thought and when using a local parameter to pass the variable, and there is a way to further eliminate the use of the global variable, maybe it was unfortunate in choosing the name.\nAnd what it would do in this case, with the aim of eliminating the global variable in the future.\nIn my own systems, I make use of only one global variable, and in many functions I pass as a parameter, with another name.\n\n>Another function within xlog.c behaves similarly:\n> RemoveXlogFile(...)\n>Only here, the callers sometimes pass the global RedoRecPtr\n>(tough indirectly, since they themselves received it as an\n>argument) and sometimes they pass in InvalidXLogRecPtr, which\n>is just a constant:\n> src/include/access/xlogdefs.h:#define InvalidXLogRecPtr 0\n>So it might make sense to remove the parameter from this\n>function, too, and replace it with a flag parameter named\n>something like \"is_valid\", or perhaps split the function\n>into two functions, one for valid and one for invalid.\nAgain in this case, it would have to be checked whether postgres really will make use of the global 
variable forever.\nWhich for me is a bad design.\n\nAnother complicated case of global variable is PGconn * conn. It is defined as global somewhere, but there is widespread use of the name \"conn\" in many places in the code, many in / bin, so if it is in the interest of postgres to correct this, it would be better to rename the global variable to something like pg_conn, or gconn.\n\n>I'm not trying to redesign xlog.c's functions in this email\n>thread, but only suggesting that these types of arguments\n>may ensue for each global variable in your patch, and it will\n>be easier for a committer to know if there is a consensus\n>about any one of them than about the entire set. When I\n>suggested you do this patch set all on this thread, I was\n>still expecting multiple patches, perhaps named along the\n>lines of:\n> unshadow.RedoRecPtr.patch.1\n> unshadow.wal_segment_size.patch.1\n> unshadow.synchronous_commit.patch.1\n> unshadow.wrconn.patch.1\n> unshadow.progname.patch.1\n> unshadow.am_syslogger.patch.1\n> unshadow.days.patch.1\n> unshadow.months.patch.1\n>etc. I'm uncomfortable giving you negative feedback of this\n>sort, since I think you are working hard to improve postgres\n>and I really appreciate it, so later tonight I'll try to come\n>back, split your patch for you as described, add an entry to\n>the commitfest if you haven't already, and mark myself as a\n>reviewer.\nI appreciate your help.\n\n>Thanks again for the hard work and the patch!\nYou are welcome.\n\nregards,\nRanier Vilela\n\n--\nMark Dilger\n\n\n",
"msg_date": "Mon, 9 Dec 2019 01:16:22 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": ">I don't think I'm actually on board with the goal here.\nOk, I understand.\n\n>Basically, if we take this seriously, we're throwing away the notion of\n>nested variable scopes and programming as if we had just a flat namespace.\n>That wasn't any fun when we had to do it back in assembly-code days, and\n>I don't see a good reason to revert to that methodology today.\nIn general I think the use global variables its a bad design. But I understand the use.\n\n>In a few of these cases, like the RedoRecPtr changes, there *might* be\n>an argument that there's room for confusion about whether the code could\n>have meant to reference the similarly-named global variable. But it's\n>just silly to make that argument in places like your substitution of\n>/days/ndays/ in date.c.\nI would rather fix everything, including days name.\n\n>Based on this sample, I reject the idea that we ought to be trying to\n>eliminate this class of warnings just because somebody's compiler can be\n>induced to emit them. If you want to make a case-by-case argument that\n>particular situations of this sort are bad programming style, let's have\n>that argument by all means. But it needs to be case-by-case, not just\n>dropping a large patch on us containing a bunch of unrelated changes\n>and zero specific justification for any of them.\nThis is why I suggested activating the alert in the development and review process, so that any cases that arose would be corrected very early.\n\n>IOW, I don't think you've taken to heart Robert's upthread advice that\n>this needs to be case-by-case and based on literary not mechanical\n>considerations.\nOk, right.\nBut I was working on the second class of shadow variables, which are local variables, within the function itself, where the patch would lead to a removal of the variable declaration, maintaining the same logic and functionality, which would lead to better performance and reduction. 
of memory usage as well as very small.\nIn that case, too, would it have to be case by case?\nWow, there are many and many shadow variables ...\n\n>BTW, if we do go forward with changing the RedoRecPtr uses, I'm not\n>in love with \"XRedoRecPtr\" as a replacement parameter name; it conveys\n>nothing much, and the \"X\" prefix is already overused in that area of\n>the code. Perhaps \"pRedoRecPtr\" would be a better choice? Or maybe\n>make the local variables be all-lower-case \"redorecptr\", which would\n>fit well in context in places like\npRedoRecPtr, It's perfect for me.\n\nregards,\nRanier Vilela\n\n",
"msg_date": "Mon, 9 Dec 2019 01:30:33 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "Hello.\n\nAt Mon, 9 Dec 2019 01:30:33 +0000, Ranier Vilela <ranier_gyn@hotmail.com> wrote in \n> >I don't think I'm actually on board with the goal here.\n> Ok, I understand.\n> \n> >Basically, if we take this seriously, we're throwing away the notion of\n> >nested variable scopes and programming as if we had just a flat namespace.\n> >That wasn't any fun when we had to do it back in assembly-code days, and\n> >I don't see a good reason to revert to that methodology today.\n> In general I think the use global variables its a bad design. But I understand the use.\n\nThe file-scoped variable is needed to be process-persistent in any\nway. If we inhibit them, the upper-modules need to create the\npersistent area instead, for example, by calling XLogInitGlobals() or\nsuch, which makes things messier. Globality doens't necessarily mean\nevil and there're reasons for -Wall doesn't warn the case. I believe\nwe, and especially committers are not who should be kept away from\nknives for the reason that knives generally have a possibility to\ninjure someone.\n\n> >In a few of these cases, like the RedoRecPtr changes, there *might* be\n> >an argument that there's room for confusion about whether the code could\n> >have meant to reference the similarly-named global variable. But it's\n> >just silly to make that argument in places like your substitution of\n> >/days/ndays/ in date.c.\n> I would rather fix everything, including days name.\n\nI might be too accustomed there, but the functions that define\noverriding locals don't modify the local variables and only the\nfunctions that don't override the globals modifies the glboals. I see\nno significant confusion here. By the way changes like \"conf_file\" ->\n\"conffile\" seems really useless as a fix patch.\n\n> >Based on this sample, I reject the idea that we ought to be trying to\n> >eliminate this class of warnings just because somebody's compiler can be\n> >induced to emit them. 
If you want to make a case-by-case argument that\n> >particular situations of this sort are bad programming style, let's have\n> >that argument by all means. But it needs to be case-by-case, not just\n> >dropping a large patch on us containing a bunch of unrelated changes\n> >and zero specific justification for any of them.\n> This is why I suggested activating the alert in the development and review process, so that any cases that arose would be corrected very early.\n\nI don't think it contributes to the argument on programming style in\nany way.\n\n> >IOW, I don't think you've taken to heart Robert's upthread advice that\n> >this needs to be case-by-case and based on literary not mechanical\n> >considerations.\n> Ok, right.\n> But I was working on the second class of shadow variables, which are local variables, within the function itself, where the patch would lead to a removal of the variable declaration, maintaining the same logic and functionality, which would lead to better performance and reduced memory usage, even if only very slightly.\n> In that case, too, would it have to be case by case?\n> Wow, there are many, many shadow variables ...\n\nAs Robert said, they are harmless as far as we notice. Actual bugs\ncaused by variable overriding would be welcome fixes. I don't\nbelieve \"lead to better performance and reduced memory usage\" without\nevidence, since modern compilers, I think, are not so stupid. Even if\nthere is any, a performance change of that extent doesn't support the proposal to\nremove variable overrides that way.\n\n> >BTW, if we do go forward with changing the RedoRecPtr uses, I'm not\n> >in love with \"XRedoRecPtr\" as a replacement parameter name; it conveys\n> >nothing much, and the \"X\" prefix is already overused in that area of\n> >the code. Perhaps \"pRedoRecPtr\" would be a better choice? 
Or maybe\n>make the local variables be all-lower-case \"redorecptr\", which would\n>fit well in context in places like\n> pRedoRecPtr is perfect for me.\n\nAnyway I strongly object to the name 'pRedoRecPtr', which suggests\nthat it is a C pointer to some variable. (And I believe we use Hungarian\nnotation only if we don't have a better way...) LatestRedoRecPtr\nlooks better to me.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 09 Dec 2019 12:40:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "On 12/8/19 10:25 AM, Mark Dilger wrote:\n> I was\n> still expecting multiple patches, perhaps named along the\n> lines of:\n> \n> � unshadow.RedoRecPtr.patch.1\n> � unshadow.wal_segment_size.patch.1\n> � unshadow.synchronous_commit.patch.1\n> � unshadow.wrconn.patch.1\n> � unshadow.progname.patch.1\n> � unshadow.am_syslogger.patch.1\n> � unshadow.days.patch.1\n> � unshadow.months.patch.1\n> \n> etc.� I'm uncomfortable giving you negative feedback of this\n> sort, since I think you are working hard to improve postgres\n> and I really appreciate it, so later tonight I'll try to come\n> back, split your patch for you as described, add an entry to\n> the commitfest if you haven't already, and mark myself as a\n> reviewer.\n\nTo start off, I've taken just six of the 22 or so variables\nthat you renamed and created patches for them. I'm not\nendorsing these in any way. I chose these mostly based on\nwhich ones showed up first in your patch file, with one\nexception.\n\nI stopped when I got to 'progname' => 'prog_name' as the\nwhole exercise was getting too absurd even for me. That\nclearly looks like one where the structure of the code\nneeds to be reconsidered, rather than just renaming stuff.\n\nI'll create the commitfest entry based on this email once\nthis has been sent.\n\nPatches attached.\n\n-- \nMark Dilger",
"msg_date": "Sun, 8 Dec 2019 20:50:41 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "\n\nOn 12/8/19 8:50 PM, Mark Dilger wrote:\n> \n> \n> On 12/8/19 10:25 AM, Mark Dilger wrote:\n>> I was\n>> still expecting multiple patches, perhaps named along the\n>> lines of:\n>>\n>> �� unshadow.RedoRecPtr.patch.1\n>> �� unshadow.wal_segment_size.patch.1\n>> �� unshadow.synchronous_commit.patch.1\n>> �� unshadow.wrconn.patch.1\n>> �� unshadow.progname.patch.1\n>> �� unshadow.am_syslogger.patch.1\n>> �� unshadow.days.patch.1\n>> �� unshadow.months.patch.1\n>>\n>> etc.� I'm uncomfortable giving you negative feedback of this\n>> sort, since I think you are working hard to improve postgres\n>> and I really appreciate it, so later tonight I'll try to come\n>> back, split your patch for you as described, add an entry to\n>> the commitfest if you haven't already, and mark myself as a\n>> reviewer.\n> \n> To start off, I've taken just six of the 22 or so variables\n> that you renamed and created patches for them.� I'm not\n> endorsing these in any way.� I chose these mostly based on\n> which ones showed up first in your patch file, with one\n> exception.\n> \n> I stopped when I got to 'progname' => 'prog_name' as the\n> whole exercise was getting too absurd even for me.� That\n> clearly looks like one where the structure of the code\n> needs to be reconsidered, rather than just renaming stuff.\n> \n> I'll create the commitfest entry based on this email once\n> this has been sent.\n> \n> Patches attached.\n\nThe commitfest item now exists at\n\n https://commitfest.postgresql.org/26/2371/\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Sun, 8 Dec 2019 20:55:19 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "On Sun, Dec 08, 2019 at 02:14:03PM -0500, Tom Lane wrote:\n> That is, these arguments *used* to be a different LSN pointer, and that\n> commit changed them to be mostly equal to RedoRecPtr, and made what\n> seems like a not very well-advised renaming to go with that.\n\nIndeed. That part was ill-thought.\n\n>> So it might make sense to remove the parameter from this\n>> function, too, and replace it with a flag parameter named\n>> something like \"is_valid\", or perhaps split the function\n>> into two functions, one for valid and one for invalid.\n> \n> Don't think I buy that. The fact that these arguments were until recently\n> different from RedoRecPtr suggests that they might someday be different\n> again, whereupon we'd have to laboriously revert such a parameter redesign.\n> I think I'd just go for names that don't have a hard implication that\n> the parameter values are the same as any particular global variable.\n\nYeah, those APIs may have a slightly different meaning in the future,\nso I agree that it makes the most sense to rename the variables of the\nfunctions from RedoRecPtr to lastRedoPtr to outline the fact that we\nare referring to the redo LSN of the last checkpoint. Attached is a\npatch for that. I'd rather back-patch that to avoid any conflicts\nwhen working on bug fixes for stable branches. Thoughts?\n--\nMichael",
"msg_date": "Mon, 9 Dec 2019 18:03:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "De: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nEnviado: segunda-feira, 9 de dezembro de 2019 03:40\n>The file-scoped variable is needed to be process-persistent in any\n>way. If we inhibit them, the upper-modules need to create the\n>persistent area instead, for example, by calling XLogInitGlobals() or\n>such, which makes things messier. Globality doens't necessarily mean\n>evil and there're reasons for -Wall doesn't warn the case. I believe\n>we, and especially committers are not who should be kept away from\n>knives for the reason that knives generally have a possibility to\n>injure someone.\nWhich harms the reusability of the code anyway.\n\n>I might be too accustomed there, but the functions that define\n>overriding locals don't modify the local variables and only the\n>functions that don't override the globals modifies the glboals. I see\n>no significant confusion here. By the way changes like \"conf_file\" ->\n>\"conffile\" seems really useless as a fix patch.\nWell i was trying to fix everything.\n\n>As Robert said, they are harmless as far as we notice. Actual bugs\n>caused by variable overriding would be welcomed to fix. I don't\n>believe \"lead to better performance and reduction (of code?)\" without\n>an evidence since modern compilers I think are not so stupid. 
Even if\n>any, performance change in such extent doesn't support the proposal to\n>remove variable overrides that way.\n\nIt's clear to me now that unless \"the thing\" is clearly a bug, don't touch it.\nI love C, so for me it's very hard to resist getting stupid things like:\nfoo ()\n{\nint i, n;\nfor (i-0; i < n; i ++);\n{\n int i;\n for (i=0; i < n; i ++);\n}\n{\n int i;\n for (i=0; i < n; i ++);\n}\nreturn;\n\nI don't know how you can do it.\n\nOf course, there are cases and cases, let's look at the example of multixact.c\ndiff --git a / src / backend / access / transam / multixact.c b / src / backend / access / transam / multixact.c\nindex 7b2448e05b..6364014fb3 100644\n--- a / src / backend / access / transam / multixact.c\n+++ b / src / backend / access / transam / multixact.c\n@@ -1589.10 +1589.10 @@ mXactCachePut (MultiXactId multi, int nmembers, MultiXactMember * members)\n qsort (entry-> members, nmembers, sizeof (MultiXactMember), mxactMemberComparator);\n \n dlist_push_head (& MXactCache, & entry-> node);\n+ pfree (entry); // <- is it really necessary?\n if (MXactCacheMembers ++> = MAX_CACHE_ENTRIES)\n {\n dlist_node * node;\n- mXactCacheEnt * entry;\n \n node = dlist_tail_node (& MXactCache);\n dlist_delete (node);\n\nI still can't decide if it's a bug or not.\n\nIf it is a bug the correct function here is pfree or what is the equivalent function to free memory?\n\n>Anyway I strongly object to the name 'pRedoRecPtr', which suggests as\n>if it is a C-pointer to some variable. (And I believe we use Hungarian\n>notation only if we don't have a better way...) LatestRedoRecPtr\n>looks better to me.\nI don't have enough information to decide if the lastest is the proper name, so I tried to change the nomenclature as little as possible.\n\nI'll submit a patch sample, which depending on the answer, will give me if it's worth it or not, keep working on it.\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 9 Dec 2019 11:02:27 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "> On 9 Dec 2019, at 12:02, Ranier Vilela <ranier_gyn@hotmail.com> wrote:\n\n> diff --git a / src / backend / access / transam / multixact.c b / src / backend / access / transam / multixact.c\n> index 7b2448e05b..6364014fb3 100644\n> --- a / src / backend / access / transam / multixact.c\n> +++ b / src / backend / access / transam / multixact.c\n> @@ -1589.10 +1589.10 @@ mXactCachePut (MultiXactId multi, int nmembers, MultiXactMember * members)\n> qsort (entry-> members, nmembers, sizeof (MultiXactMember), mxactMemberComparator);\n> \n> dlist_push_head (& MXactCache, & entry-> node);\n> + pfree (entry); // <- is it really necessary?\n\nPushing an object to a dlist doesn't copy the object, so freeing entry here\nwould cause a dangling pointer on the list unless I'm misreading. Note that\nentry is allocated in a specific context to ensure it has the correct lifespan.\nThe README in backend/utils/mmgr is a good primer on how memory contexts work\nin postgres.\n\nAs a matter of fact, the pfree call in the cache purge if block isn't really\nrequired either since the entire cache will be freed at the end of the\ntransaction.\n\n> if (MXactCacheMembers ++> = MAX_CACHE_ENTRIES)\n> {\n> dlist_node * node;\n> - mXactCacheEnt * entry;\n\nI can agree that reusing the name entry here isn't ideal, as it's so close, but\nremoving it is worse. I'd prefer to rename it purged, or purged_entry or\nsomething along those lines.\n\ncheers ./daniel\n\n",
"msg_date": "Mon, 9 Dec 2019 13:02:14 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "On Sun, Dec 8, 2019 at 1:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Ranier Vilela <ranier_gyn@hotmail.com> writes:\n> > This is the first part of the variable shadow fixes.\n> > Basically it consists of renaming the variables in collision with the global ones, with the minimum change in the semantics.\n>\n> I don't think I'm actually on board with the goal here.\n\nI don't know what to do about the RedoRecPtr mess, but surely\nsubscriptioncmds.c's use of synchronous_commit as a char * when it's\nalready exists as a global variable of type int is not good practice.\nWe've been known to do things like reference globals from within macro\ndefinitions, and while static inlining is likely to make that practice\nless common in the future, we've got plenty of existing instances,\nincluding one that uses that exact variable name (SyncRepRequested()).\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 9 Dec 2019 09:33:15 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Sun, Dec 8, 2019 at 1:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I don't think I'm actually on board with the goal here.\n\n> I don't know what to do about the RedoRecPtr mess, but surely\n> subscriptioncmds.c's use of synchronous_commit as a char * when it's\n> already exists as a global variable of type int is not good practice.\n\nWell, again, this is a case-by-case question. I tend to agree that\nchanging that usage in subscriptioncmds.c might be a good idea.\nThat doesn't mean I need to be on board with wholesale removal\nof shadowing warnings.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Dec 2019 10:23:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "On Mon, Dec 9, 2019 at 10:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Sun, Dec 8, 2019 at 1:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I don't think I'm actually on board with the goal here.\n> > I don't know what to do about the RedoRecPtr mess, but surely\n> > subscriptioncmds.c's use of synchronous_commit as a char * when it's\n> > already exists as a global variable of type int is not good practice.\n> Well, again, this is a case-by-case question. I tend to agree that\n> changing that usage in subscriptioncmds.c might be a good idea.\n> That doesn't mean I need to be on board with wholesale removal\n> of shadowing warnings.\n\nI agree that those things are different, but I'm not sure I understand\nthe nuances of your view. I think my view is that if something in our\ncode is shadowing something else in our code, that's probably\nsomething we ought to look at fixing. If you disagree, I'd be curious\nto know why; I suspect that, as in this case, such cases are just\ncreating a risk of confusion without any real benefit. To me, the grey\narea is in conflicts between stuff in our code and stuff in system\nheader files. I'm not sure I'd want to try to have precisely 0\nconflicts with every crazy decision by every OS / libc maintainer out\nthere, and I suspect on that point at least we are in agreement.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 9 Dec 2019 10:50:00 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Dec 9, 2019 at 10:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Well, again, this is a case-by-case question. I tend to agree that\n>> changing that usage in subscriptioncmds.c might be a good idea.\n>> That doesn't mean I need to be on board with wholesale removal\n>> of shadowing warnings.\n\n> I agree that those things are different, but I'm not sure I understand\n> the nuances of your view. I think my view is that if something in our\n> code is shadowing something else in our code, that's probably\n> something we ought to look at fixing. If you disagree, I'd be curious\n> to know why; I suspect that, as in this case, such cases are just\n> creating a risk of confusion without any real benefit.\n\nI think it depends a lot on the particular identifiers in use. You\nmentioned examples like \"i\" and \"lc\", and I'd add other obviously\nnonce variable names like \"oldcxt\". I'm not particularly concerned\nabout shadowing arising from somebody writing a five-line loop using\na local \"i\" inside a much larger loop also using \"i\" --- yeah, in\ntheory there could be an issue, but in practice there isn't. Being\npicky about that just adds difficulty when writing/reviewing a patch\nthat adds such a five-line loop.\n\nYour point about risking macro breakage from shadowing of global\nvariable names is a good one, but again I don't think it holds up\nas an argument that we have to get rid of all shadowing.\n\n> To me, the grey\n> area is in conflicts between stuff in our code and stuff in system\n> header files. 
I'm not sure I'd want to try to have precisely 0\n> conflicts with every crazy decision by every OS / libc maintainer out\n> there, and I suspect on that point at least we are in agreement.\n\nI believe the relevant C standards (and practice) are that random\nnames exposed by system headers ought to start with some underscores.\nIf there's a conflict there, it's a bug in the header and cause\nfor a bug report to the OS vendor, not us.\n\nNow, if a conflict of that sort exists and is causing a live bug in PG\non a popular OS, then I'd likely be on board with adjusting our code\nto dodge the problem. But not with doing so just to silence a\ncompiler warning.\n\nA final point here is that in practice, we've had way more problems\nwith conflicts against system headers' definitions of functions,\nmacros, and typedefs than global variables, which is unsurprising\nconsidering how few of the latter are actually exported by typical\nC library APIs. So I'm not sure that there is any big problem to\nbe solved there in the first place.\n\nThe only thing I think is really a substantial bug risk here is your\npoint about our own macros referencing our own global variables.\nWe might be better off fixing that in a localized way by establishing\na policy that any such macros should be converted to static inlines.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Dec 2019 11:48:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "On Mon, Dec 9, 2019 at 11:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think it depends a lot on the particular identifiers in use. You\n> mentioned examples like \"i\" and \"lc\", and I'd add other obviously\n> nonce variable names like \"oldcxt\". I'm not particularly concerned\n> about shadowing arising from somebody writing a five-line loop using\n> a local \"i\" inside a much larger loop also using \"i\" --- yeah, in\n> theory there could be an issue, but in practice there isn't. Being\n> picky about that just adds difficulty when writing/reviewing a patch\n> that adds such a five-line loop.\n\nI think I would take the contrary view here. I think reusing the same\nvariable names in a single function is confusing, and if I noticed it\nwhile reviewing, I would ask for it to be changed. It's not a\nfive-alarm fire, but it's not good, either.\n\n> > To me, the grey\n> > area is in conflicts between stuff in our code and stuff in system\n> > header files. I'm not sure I'd want to try to have precisely 0\n> > conflicts with every crazy decision by every OS / libc maintainer out\n> > there, and I suspect on that point at least we are in agreement.\n>\n> I believe the relevant C standards (and practice) are that random\n> names exposed by system headers ought to start with some underscores.\n> If there's a conflict there, it's a bug in the header and cause\n> for a bug report to the OS vendor, not us.\n\nSure. I mean we'd have to look at individual cases, but in general I agree.\n\n> Now, if a conflict of that sort exists and is causing a live bug in PG\n> on a popular OS, then I'd likely be on board with adjusting our code\n> to dodge the problem. 
But not with doing so just to silence a\n> compiler warning.\n\nSounds reasonable.\n\n> A final point here is that in practice, we've had way more problems\n> with conflicts against system headers' definitions of functions,\n> macros, and typedefs than global variables, which is unsurprising\n> considering how few of the latter are actually exported by typical\n> C library APIs. So I'm not sure that there is any big problem to\n> be solved there in the first place.\n>\n> The only thing I think is really a substantial bug risk here is your\n> point about our own macros referencing our own global variables.\n> We might be better off fixing that in a localized way by establishing\n> a policy that any such macros should be converted to static inlines.\n\nThat would be a lot of work, but it would probably have some side\nbenefits, like making things more type-safe.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 9 Dec 2019 11:59:23 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "This the second version of the global unshadow patch.\nTaking into consideration made. In case anyone else revises.\n\nregards\nRanier Vilela",
"msg_date": "Mon, 9 Dec 2019 17:26:33 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": ">I'll create the commitfest entry based on this email once\n>this has been sent.\nCan you add the patch attached?\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 9 Dec 2019 17:28:19 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "\n\nOn 12/9/19 9:28 AM, Ranier Vilela wrote:\n>> I'll create the commitfest entry based on this email once\n>> this has been sent.\n> Can you add the patch attached?\n\nThat showed up in the commitfest entry automatically when you\nreplied to this thread with the attachment.\n\nYou might consider signing up so you can log into the commitfest\napp. https://www.postgresql.org/account/signup/\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Mon, 9 Dec 2019 09:32:25 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-09 11:59:23 -0500, Robert Haas wrote:\n> On Mon, Dec 9, 2019 at 11:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I think it depends a lot on the particular identifiers in use. You\n> > mentioned examples like \"i\" and \"lc\", and I'd add other obviously\n> > nonce variable names like \"oldcxt\". I'm not particularly concerned\n> > about shadowing arising from somebody writing a five-line loop using\n> > a local \"i\" inside a much larger loop also using \"i\" --- yeah, in\n> > theory there could be an issue, but in practice there isn't. Being\n> > picky about that just adds difficulty when writing/reviewing a patch\n> > that adds such a five-line loop.\n> \n> I think I would take the contrary view here. I think reusing the same\n> variable names in a single function is confusing, and if I noticed it\n> while reviewing, I would ask for it to be changed. It's not a\n> five-alarm fire, but it's not good, either.\n\n+1. For me it leaves mildly bad taste seing code like that.\n\n\n> > > To me, the grey\n> > > area is in conflicts between stuff in our code and stuff in system\n> > > header files. I'm not sure I'd want to try to have precisely 0\n> > > conflicts with every crazy decision by every OS / libc maintainer out\n> > > there, and I suspect on that point at least we are in agreement.\n> >\n> > I believe the relevant C standards (and practice) are that random\n> > names exposed by system headers ought to start with some underscores.\n> > If there's a conflict there, it's a bug in the header and cause\n> > for a bug report to the OS vendor, not us.\n> \n> Sure. I mean we'd have to look at individual cases, but in general I agree.\n\nWe do have a few files where we have names starting with underscores\nourselves, imo not a great idea for most of them.\n\n\n> > Now, if a conflict of that sort exists and is causing a live bug in PG\n> > on a popular OS, then I'd likely be on board with adjusting our code\n> > to dodge the problem. 
But not with doing so just to silence a\n> > compiler warning.\n> \n> Sounds reasonable.\n\nFWIW, I've had bugs in code submitted to PG (both by myself and me\nmerging other people's work IIRC) that were related to such naming\nconflicts.\n\n\n> > The only thing I think is really a substantial bug risk here is your\n> > point about our own macros referencing our own global variables.\n> > We might be better off fixing that in a localized way by establishing\n> > a policy that any such macros should be converted to static inlines.\n> \n> That would be a lot of work, but it would probably have some side\n> benefits, like making things more type-safe.\n\nIt's also not always possible in C99, as we have plenty macros with\nessentially dynamic types. And there's no typeof() in standard C,\nunfortunately (C11's _Generic can help, but isn't great either).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 9 Dec 2019 13:32:17 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-12-09 11:59:23 -0500, Robert Haas wrote:\n>> On Mon, Dec 9, 2019 at 11:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> The only thing I think is really a substantial bug risk here is your\n>>> point about our own macros referencing our own global variables.\n>>> We might be better off fixing that in a localized way by establishing\n>>> a policy that any such macros should be converted to static inlines.\n\n>> That would be a lot of work, but it would probably have some side\n>> benefits, like making things more type-safe.\n\n> It's also not always possible in C99, as we have plenty macros with\n> essentially dynamic types. And there's no typeof() in standard C,\n> unfortunately (C11's _Generic can help, but isn't great either).\n\nHow much overlap is there between macros referencing global variables\nand macros with indeterminate types? Not much I bet. I'd mostly\nbe worried about things like CHECK_FOR_INTERRUPTS().\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Dec 2019 16:38:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "On 2019-Dec-09, Kyotaro Horiguchi wrote:\n\n\n> > >BTW, if we do go forward with changing the RedoRecPtr uses, I'm not\n> > >in love with \"XRedoRecPtr\" as a replacement parameter name; it conveys\n> > >nothing much, and the \"X\" prefix is already overused in that area of\n> > >the code. Perhaps \"pRedoRecPtr\" would be a better choice? Or maybe\n> > >make the local variables be all-lower-case \"redorecptr\", which would\n> > >fit well in context in places like\n> > pRedoRecPtr, It's perfect for me.\n> \n> Anyway I strongly object to the name 'pRedoRecPtr', which suggests as\n> if it is a C-pointer to some variable. (And I believe we use Hungarian\n> notation only if we don't have a better way...) LatestRedoRecPtr\n> looks better to me.\n\nWe have a not-consistently-used convention that names in CamelCase are\nused for global variables. Naming a function parameter in that style\nseems pointlessly confusing. I would rather use redorecptr as Tom\nsuggested, which fits with the style used for the other arguments of\nthat function. BTW prepending an X or a p looks like minimum effort...\nI'd stay away from that.\n\nIt's probably a lost cause to enforce such a style with ironclad\nconsistency, but I suggest we make at least a token effort at it, and\nkeep our source as least confusing as possible.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 9 Dec 2019 18:49:42 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "On 2019-Dec-09, Ranier Vilela wrote:\n\n> --- a/src/backend/access/transam/xlogreader.c\n> +++ b/src/backend/access/transam/xlogreader.c\n> @@ -70,7 +70,7 @@ report_invalid_record(XLogReaderState *state, const char *fmt,...)\n> * Returns NULL if the xlogreader couldn't be allocated.\n> */\n> XLogReaderState *\n> -XLogReaderAllocate(int wal_segment_size, const char *waldir,\n> +XLogReaderAllocate(int wallog_segment_size, const char *waldir,\n> \t\t\t\t XLogPageReadCB pagereadfunc, void *private_data)\n> {\n> \tXLogReaderState *state;\n\nI find this choice a bit ugly and even more confusing than the original.\nI'd change this to be just \"segsize\".\n\nI would tend to name the GUC variable as if it were a global in the\nsense that I proposed in my previous response (ie. WalSegmentSize), but\nthat creates a bit of a problem when one greps the source looking for\nreference to the GUCs. Some GUCs do have CamelCase names and others\ndon't; I've grown fond of the current style of using the same name for\nthe variable as for the GUC itself, for grepping reasons. So I'm not\ngoing to propose to do that. But let's not make a function parameter\nhave a name that vaguely suggests that it itself is a GUC.\n\n> @@ -430,14 +430,14 @@ CreateSubscription(CreateSubscriptionStmt *stmt, bool isTopLevel)\n> \t{\n> \t\tXLogRecPtr\tlsn;\n> \t\tchar\t *err;\n> -\t\tWalReceiverConn *wrconn;\n> +\t\tWalReceiverConn *walrconn;\n> \t\tList\t *tables;\n> \t\tListCell *lc;\n> \t\tchar\t\ttable_state;\n> \n> \t\t/* Try to connect to the publisher. 
*/\n> -\t\twrconn = walrcv_connect(conninfo, true, stmt->subname, &err);\n> -\t\tif (!wrconn)\n> +\t\twalrconn = walrcv_connect(conninfo, true, stmt->subname, &err);\n> +\t\tif (!walrconn)\n> \t\t\tereport(ERROR,\n> \t\t\t\t\t(errmsg(\"could not connect to the publisher: %s\", err)));\n> \n\nHere I propose to rename the global instead (to WalReceiverConn maybe).\nThere's nothing about the name \"wrconn\" that suggests it's a global\nvariable. In any other place where the object is used as a local\nvariable, I'd just use \"conn\". Trying to be clever and adding a letter\nhere or a letter there makes it *more* likely that you'll reference the\nwrong one in some function.\n\n> index a9edbfd4a4..1f5921b6e7 100644\n> --- a/src/backend/main/main.c\n> +++ b/src/backend/main/main.c\n> @@ -225,7 +225,7 @@ main(int argc, char *argv[])\n> * without help. Avoid adding more here, if you can.\n> */\n> static void\n> -startup_hacks(const char *progname)\n> +startup_hacks(const char *prog_name)\n> {\n> \t/*\n> \t * Windows-specific execution environment hacking.\n\nI don't agree with this change very much. I think \"progname\" in\nparticular is a bit of a debacle right now but I don't think this is the\nbest fix. I'd leave this alone.\n\n> diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c\n> index 8b2a2be1c0..e12f41cea4 100644\n> --- a/src/backend/replication/walsender.c\n> +++ b/src/backend/replication/walsender.c\n> @@ -3223,7 +3223,7 @@ pg_stat_get_wal_senders(PG_FUNCTION_ARGS)\n> \tfor (i = 0; i < max_wal_senders; i++)\n> \t{\n> \t\tWalSnd\t *walsnd = &WalSndCtl->walsnds[i];\n> -\t\tXLogRecPtr\tsentPtr;\n> +\t\tXLogRecPtr\twalsentPtr;\n> \t\tXLogRecPtr\twrite;\n> \t\tXLogRecPtr\tflush;\n> \t\tXLogRecPtr\tapply;\n\nAs before: let's rename the file-level static instead. \"sentPtr\" is not\na good name.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 9 Dec 2019 19:06:28 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> We have a not-consistently-used convention that names in CamelCase are\n> used for global variables. Naming a function parameter in that style\n> seems pointlessly confusing. I would rather use redorecptr as Tom\n> suggested, which fits with the style used for the other arguments of\n> that function. BTW prepending an X or a p looks like minimum effort...\n> I'd stay away from that.\n\nActually, for the particular case of RemoveXlogFile(s), I wonder if it\nshouldn't be \"startptr\" to go with the other argument \"endptr\". This line\nof thinking might not lead to nicer names in other functions, of course.\nBut we shouldn't assume that a one-size-fits-all solution is going to\nimprove legibility, and in the end, legibility is what this should be\nabout.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Dec 2019 17:11:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "De: Alvaro Herrera <alvherre@2ndquadrant.com>\nEnviado: segunda-feira, 9 de dezembro de 2019 22:06\n\n>I find this choice a bit ugly and even more confusing than the original.\n>I'd change this to be just \"segsize\".\nOk.\n\n>I would tend to name the GUC variable as if it were a global in the\n>sense that I proposed in my previous response (ie. WalSegmentSize), but\n>that creates a bit of a problem when one greps the source looking for\n>reference to the GUCs. Some GUCs do have CamelCase names and others\n>don't; I've grown fond of the current style of using the same name for\n>the variable as for the GUC itself, for grepping reasons. So I'm not\n>going to propose to do that. But let's not make a function parameter\n>have a name that vaguely suggests that it itself is a GUC.\nI understand the ease of grepping.\nBut ideally, having global names that by convention would make it very clear that they are global, something like:\npg_conn or gconn or guc_synch_commit\nThe prefix does not matter, as long as once all the new variables and the ones that might have to be changed were chosen, they adopted the agreed nomenclature.\nThat way, when looking for global names, it would be easy.\n\n>Here I propose to rename the global instead (to WalReceiverConn maybe).\n>There's nothing about the name \"wrconn\" that suggests it's a global\n>variable. In any other place where the object is used as a local\n>variable, I'd just use \"conn\". Trying to be clever and adding a letter\n>here or a letter there makes it *more* likely that you'll reference the\n>wrong one in some function.\nAgain, it could be that name, WalReceiverConn, but nothing in it suggests it is a global one.\nFor a project that makes extensive use of globals, it would help to have a nomenclature defined at least for the prefix:\npg_WalReceiverConn or gWalReceiverConn and if it is a guc, guc_WalReceiverConn?\n\n>I don't agree with this change very much. 
I think \"progname\" in\n>particular is a bit of a debacle right now but I don't think this is the\n>best fix. I'd leave this alone.\nOk. In such cases, it doesn't hurt today. But for future reasons, it would be better to fix everything, imo.\n\n>As before: let's rename the file-level static instead. \"sentPtr\" is not\n>a good name.\ngsent_Ptr or pg_sentPtr?\n\nregards,\nRanier Vilela\n\n",
"msg_date": "Tue, 10 Dec 2019 11:20:44 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "New version the global patch, with the considerations.\nUnfortunately WalReceiverConn cannot be used because it is currently the typedef name for the structure.\nI switched to WalReceiverConnection, it was long but it looks good.\nRedoRecPtr proper name has no consensus yet, so it was still lastRedoRecPtr.\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 10 Dec 2019 13:20:57 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "On Tue, Dec 10, 2019 at 5:21 AM Ranier Vilela <ranier_gyn@hotmail.com>\nwrote:\n\n> New version the global patch, with the considerations.\n> Unfortunately WalReceiverConn cannot be used because it is currently the\n> typedef name for the structure.\n> I switched to WalReceiverConnection, it was long but it looks good.\n> RedoRecPtr proper name has no consensus yet, so it was still\n> lastRedoRecPtr.\n>\n>\nFor someone that expounds consistency - this patch is the furthest thing\nfrom it.\n\nIn some places names are randomly changed to have an underscore\n(authmethodlocal to authmethod_local with the obvious inconsistency as\nwell) - in some places names are changed to remove underscores (stop_t to\nstopt). Some places random letters are added (checkPoint to xcheckPoint)\nsome places perfectly good names are truncated (conf_file to file).\n\nRandom places remove perfectly good prefixes and replace with single\nletters (numTables to nTables)\n\nRandom places switch from lower case names to upper case names (sentPtr to\nWalSentPtr) most places leave lower case names (days to ndays).\n\nPlease at least be consistent within the patch itself.....\n\nJohn W Higgins\n\nOn Tue, Dec 10, 2019 at 5:21 AM Ranier Vilela <ranier_gyn@hotmail.com> wrote:New version the global patch, with the considerations.\nUnfortunately WalReceiverConn cannot be used because it is currently the typedef name for the structure.\nI switched to WalReceiverConnection, it was long but it looks good.\nRedoRecPtr proper name has no consensus yet, so it was still lastRedoRecPtr.\nFor someone that expounds consistency - this patch is the furthest thing from it.In some places names are randomly changed to have an underscore (authmethodlocal to authmethod_local with the obvious inconsistency as well) - in some places names are changed to remove underscores (stop_t to stopt). 
Some places random letters are added (checkPoint to xcheckPoint) some places perfectly good names are truncated (conf_file to file).Random places remove perfectly good prefixes and replace with single letters (numTables to nTables)Random places switch from lower case names to upper case names (sentPtr to WalSentPtr) most places leave lower case names (days to ndays).Please at least be consistent within the patch itself.....John W Higgins",
"msg_date": "Tue, 10 Dec 2019 07:58:41 -0800",
"msg_from": "John W Higgins <wishdev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "De: John W Higgins <wishdev@gmail.com>\nEnviado: terça-feira, 10 de dezembro de 2019 15:58\n\n>For someone that expounds consistency - this patch is the furthest thing from it.\n>In some places names are randomly changed to have an underscore >(authmethodlocal to authmethod_local with the obvious inconsistency as well) - >in some places names are changed to remove underscores (stop_t to stopt). >Some places random letters are added (checkPoint to xcheckPoint) some places >perfectly good names are truncated (conf_file to file).\nThe first purpose of the patch was to remove collisions from shadow global variable names.\nThe second was not to change the semantics of variable names, hence the use of x or putting or remove underscore.\nBut I agree with you that the choice of names can improve.\nxcheckpoint sounds ugly.\nstopt sounds ugly too.\n\n>Random places remove perfectly good prefixes and replace with single letters >(numTables to nTables)\nnumTables already a global variable name.\nnTables It seems very reasonable to me to contain the number of tables.\n\n>Random places switch from lower case names to upper case names (sentPtr to >WalSentPtr) most places leave lower case names (days to ndays).\nagain senPtr already a global variable name.\nWell, I tried to follow the local source style a little, since the project does not have a default for global names.\nThere we have some WalSntCtl por example.\n\nndays sounds very good to me for number of days.\n\n>Please at least be consistent within the patch itself.....\nI'm trying.\n\nregards,\nRanier Vilela\n\n",
"msg_date": "Tue, 10 Dec 2019 17:13:47 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "Greetings,\n\n* Ranier Vilela (ranier_gyn@hotmail.com) wrote:\n> >For someone that expounds consistency - this patch is the furthest thing from it.\n> >In some places names are randomly changed to have an underscore >(authmethodlocal to authmethod_local with the obvious inconsistency as well) - >in some places names are changed to remove underscores (stop_t to stopt). >Some places random letters are added (checkPoint to xcheckPoint) some places >perfectly good names are truncated (conf_file to file).\n> The first purpose of the patch was to remove collisions from shadow global variable names.\n\nThere's multiple ways to get there though and I think what you're seeing\nis that the \"just change it to something else\" answer isn't necessairly\ngoing to be viewed as an improvement (or, at least, not enough of an\nimprovement to accept the cost of the change).\n\n> The second was not to change the semantics of variable names, hence the use of x or putting or remove underscore.\n\nWhy not change the variables? Changes that also improve the code itself\nalong with eliminating the shadowing of the global variable are going to\nbe a lot easier to be accepted.\n\n> >Random places remove perfectly good prefixes and replace with single letters >(numTables to nTables)\n> numTables already a global variable name.\n\nSure, but have you looked at how it's used? 
Instead of just renaming\nthe numTables variables in the functions that accept it- could those\nvariables just be removed instead of changing their name to make it look\nlike they're something different when they aren't actually different?\n\nI've only spent a bit of time looking at it, but it sure looks like the\nvariables could just be removed, and doing so doesn't break the\nregression tests, which supports the idea that maybe there's a better\nway to deal with those particular variables rather than renaming them.\n\nAnother approach to consider might be to move some global variables into\nstructures that are then global with better names to indicate that's\nwhat they are.\n\nIn short, a hack-and-slash patch that doesn't really spend much time\nconsidering the changes beyond \"let's just change these to be different\nto avoid shadowing globals\" isn't really a good way to go about\naddressing these cases and has a good chance of making things more\nconfusing, not less.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 10 Dec 2019 12:52:57 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "New version the global patch unshadow.\n* names more consistent and readable.\n* without big changes.\n* goal,, unshadow all global variables, only, even the simplest.\n\nregards,\nRanier Vilela",
"msg_date": "Wed, 11 Dec 2019 12:52:38 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "Greetings,\n\n* Ranier Vilela (ranier_gyn@hotmail.com) wrote:\n> New version the global patch unshadow.\n> * names more consistent and readable.\n> * without big changes.\n> * goal,, unshadow all global variables, only, even the simplest.\n\nThis didn't address any of the comments that I raised elsewhere on this\nthread... I certainly don't like the changes being proposed here for\npg_dump.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 11 Dec 2019 10:19:08 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "On Mon, Dec 09, 2019 at 05:11:10PM -0500, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> We have a not-consistently-used convention that names in CamelCase are\n>> used for global variables. Naming a function parameter in that style\n>> seems pointlessly confusing. I would rather use redorecptr as Tom\n>> suggested, which fits with the style used for the other arguments of\n>> that function. BTW prepending an X or a p looks like minimum effort...\n>> I'd stay away from that.\n> \n> Actually, for the particular case of RemoveXlogFile(s), I wonder if it\n> shouldn't be \"startptr\" to go with the other argument \"endptr\". This line\n> of thinking might not lead to nicer names in other functions, of course.\n> But we shouldn't assume that a one-size-fits-all solution is going to\n> improve legibility, and in the end, legibility is what this should be\n> about.\n\nHmm. In the case of this logic, we are referring to the current end\nof WAL with endptr, and what you are calling the startptr is really\nthe redo LSN of the last checkpoint in all the routines which are now\nconfused with RedoRecPtr: RemoveOldXlogFile, RemoveXlogFile and\nXLOGfileslop. Using lower-case for all the characters of the variable\nname sounds like a good improvement as well, so taking a combination\nof all that I would just use \"lastredoptr\" in those three code paths\n(note that we used to have PriorRedoPtr before). As that's a\nconfusion I introduced with d9fadbf, I would like to fix that and\nbackpatch this change down to 11. (Ranier gets the authorship\nper se as that's extracted from a larger patch).\n--\nMichael",
"msg_date": "Tue, 17 Dec 2019 09:36:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 09:36:13AM +0900, Michael Paquier wrote:\n> As that's a confusion I introduced with d9fadbf, I would like to fix\n> that and backpatch this change down to 11. (Ranier gets the\n> authorship per se as that's extracted from a larger patch).\n\nCommitted that part.\n\nI got to look at the rest of the stuff discussed, and I am not sure\nthat any of the changes are actually things which improve\nreadability.\n\nLet's take one example. The changes in pg_dump/ like\n/progname/prog_name/ have just been done in haste, without actual\nthoughts about how the problem ought to be fixed. And in this case,\nsomething which could be more adapted is to remove the argument from\nusage() because progname is a global variable, initialized from the\nbeginning in pg_restore.\n--\nMichael",
"msg_date": "Wed, 18 Dec 2019 10:18:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
    "msg_contents": "On 2019-Dec-18, Michael Paquier wrote:\n\n> Let's take one example. The changes in pg_dump/ like\n> /progname/prog_name/ have just been done in haste, without actual\n> thoughts about how the problem ought to be fixed. And in this case,\n> something which could be more adapted is to remove the argument from\n> usage() because progname is a global variable, initialized from the\n> beginning in pg_restore.\n\nWe discussed progname as a global/local before -- IIRC in the thread\nthat introduced the frontend logging API -- and while I think the whole\nissue could stand some improvement, we shouldn't let it be driven by\nminor changes; that'll only make it more confusing. IMO if we want it\nimproved, a larger change (involving the bunch of frontend programs) is\nwhat to look for. Maybe what you suggest is an improvement, though\n(certainly the \"prog_name\" patch wasn't).\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 18 Dec 2019 06:55:16 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "On 2019-12-18 10:55, Alvaro Herrera wrote:\n> On 2019-Dec-18, Michael Paquier wrote:\n> \n>> Let's take one example. The changes in pg_dump/ like\n>> /progname/prog_name/ have just been done in haste, without actual\n>> thoughts about how the problem ought to be fixed. And in this case,\n>> something which could be more adapted is to remove the argument from\n>> usage() because progname is a global variable, initialized from the\n>> beginning in pg_restore.\n> \n> We discussed progname as a global/local before -- IIRC in the thread\n> that introduced the frontend logging API -- and while I think the whole\n> issue could stand some improvement, we shouldn't let it be driven by\n> minor changes; that'll only make it more confusing. IMO if we want it\n> improved, a larger change (involving the bunch of frontend programs) is\n> what to look for. Maybe what you suggest is an improvement, though\n> (certainly the \"prog_name\" patch wasn't).\n\nThis thread is still in the commit fest, but it's apparently gone as far \nas it will, so I've set it to \"Committed\" for lack of a \"partial\" status.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 27 Feb 2020 10:43:50 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "On Thu, Feb 27, 2020 at 10:43:50AM +0100, Peter Eisentraut wrote:\n> This thread is still in the commit fest, but it's apparently gone as far as\n> it will, so I've set it to \"Committed\" for lack of a \"partial\" status.\n\nThanks, that sounds right to me. I was just looking at the latest\npatch presented after seeing your reply, and I did not spot\nimmediately any issues standing out compared to the others.\n--\nMichael",
"msg_date": "Thu, 27 Feb 2020 21:55:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
}
] |
[
{
    "msg_contents": "Hi hackers.\n\nI made a patch fixing build and install problems under MSYS2, including\nllvmjit.\n\nI have tested this in my environment and it works; of course it needs more\nextensive testing.\nAttached is a patch that fixes it. Tag REL_12_1.\n\n-- \nBest regards.\nGuram Duka.",
"msg_date": "Wed, 4 Dec 2019 10:46:39 +0300",
"msg_from": "Guram Duka <guram.duka@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Fix PostgreSQL 12.1 server build and install problems under\n MSYS2"
},
{
"msg_contents": "On Wed, Dec 04, 2019 at 10:46:39AM +0300, Guram Duka wrote:\n> I made a patch fixing build and install problems under MSYS2, including\n> llvmjit.\n> \n> I have tested this in my environment and it works, of course need more\n> extensive testing.\n> Attached is a patch that fixes it. Tag REL_12_1.\n\nDo you have the same problems if you compile the code from the latest\nbranch of the master branch?\n\nCould you register this patch to the upcoming commit fest? Here is a\nlink to it:\nhttps://commitfest.postgresql.org/26/\n\nThanks,\n--\nMichael",
"msg_date": "Wed, 4 Dec 2019 17:05:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix PostgreSQL 12.1 server build and install problems\n under MSYS2"
},
{
    "msg_contents": "The master branch gets an error at the configure stage and then while\ncompiling, just like the 12.1 branch:\n\nchecking how to link an embedded Python application... configure: error:\ncould not find shared library for Python\nYou might have to rebuild your Python installation. Refer to the\ndocumentation for details. Use --without-python to disable building\nPL/Python.\n\nI registered the patch.\n\nWed, 4 Dec 2019 at 11:05, Michael Paquier <michael@paquier.xyz>:\n\n> On Wed, Dec 04, 2019 at 10:46:39AM +0300, Guram Duka wrote:\n> > I made a patch fixing build and install problems under MSYS2, including\n> > llvmjit.\n> >\n> > I have tested this in my environment and it works, of course need more\n> > extensive testing.\n> > Attached is a patch that fixes it. Tag REL_12_1.\n>\n> Do you have the same problems if you compile the code from the latest\n> branch of the master branch?\n>\n> Could you register this patch to the upcoming commit fest? Here is a\n> link to it:\n> https://commitfest.postgresql.org/26/\n>\n> Thanks,\n> --\n> Michael\n>\n\n\n-- \nBest regards.\nGuram Duka.",
"msg_date": "Wed, 4 Dec 2019 13:32:14 +0300",
"msg_from": "Guram Duka <guram.duka@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Fix PostgreSQL 12.1 server build and install problems\n under MSYS2"
},
{
"msg_contents": "On 2019-12-04 11:32, Guram Duka wrote:\n> Master branch got error in configure stage and then compiling like 12.1 \n> branch.\n> \n> checking how to link an embedded Python application... configure:\n> error: could not find shared library for Python\n> You might have to rebuild your Python installation. Refer to the\n> documentation for details. Use --without-python to disable building\n> PL/Python.\n> \n> I registered the patch.\n\nAs explained to you in the previous patch, you need to explain each \nchange in detail and show the problems you are facing. Other people \nhave used this platform before and it works for them, so you need to \nexplain what's different for you, especially considering the extent of \nthe changes you are proposing.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 31 Dec 2019 11:56:26 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix PostgreSQL 12.1 server build and install problems\n under MSYS2"
},
{
"msg_contents": "On 2019-12-31 11:56, Peter Eisentraut wrote:\n> On 2019-12-04 11:32, Guram Duka wrote:\n>> Master branch got error in configure stage and then compiling like 12.1\n>> branch.\n>>\n>> checking how to link an embedded Python application... configure:\n>> error: could not find shared library for Python\n>> You might have to rebuild your Python installation. Refer to the\n>> documentation for details. Use --without-python to disable building\n>> PL/Python.\n>>\n>> I registered the patch.\n> \n> As explained to you in the previous patch, you need to explain each\n> change in detail and show the problems you are facing. Other people\n> have used this platform before and it works for them, so you need to\n> explain what's different for you, especially considering the extent of\n> the changes you are proposing.\n\nI'm setting this patch as rejected in the commit fest.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 30 Jan 2020 15:35:38 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix PostgreSQL 12.1 server build and install problems\n under MSYS2"
}
] |
[
{
    "msg_contents": "Hi,\n\nProbably a simple question, but I don't see a simple answer so far. In\none extension I want to convert uint64 into a numeric to put it\neventually into a jsonb object. As far as I see in numeric.c there are\nfunctions only for signed int64. Is there a way to achieve this with\nuint64 (without duplicating a significant part of the numeric implementation\nin the extension)?\n\n\n",
"msg_date": "Wed, 4 Dec 2019 11:26:59 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Unsigned 64 bit integer to numeric"
},
{
    "msg_contents": "Hi,\nI don't think so, but there's an (unmaintained?) uint extension at\nhttps://github.com/petere/pguint.git\n\n\nOn Wed, Dec 4, 2019 at 11:24 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> Hi,\n>\n> Probably a simple question, but I don't see a simple answer so far. In\n> one extension I want to convert uint64 into a numeric to put it\n> eventually into a jsonb object. As far as I see in numeric.c there are\n> functions only for signed int64. Is there a way to achive this with\n> uint64 (without duplicating significant part of numeric implementation\n> in the extension)?\n>\n>\n>\n",
"msg_date": "Wed, 4 Dec 2019 12:42:30 +0100",
"msg_from": "didier <did447@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unsigned 64 bit integer to numeric"
},
{
"msg_contents": ">>>>> \"Dmitry\" == Dmitry Dolgov <9erthalion6@gmail.com> writes:\n\n Dmitry> Hi,\n\n Dmitry> Probably a simple question, but I don't see a simple answer so\n Dmitry> far. In one extension I want to convert uint64 into a numeric\n Dmitry> to put it eventually into a jsonb object. As far as I see in\n Dmitry> numeric.c there are functions only for signed int64. Is there a\n Dmitry> way to achive this with uint64 (without duplicating significant\n Dmitry> part of numeric implementation in the extension)?\n\nSure. Flip the top bit; convert the value as if signed; then subtract\n-(2^63) from the result. (Easier to subtract -(2^63) than to add 2^63,\nsince the former can itself be represented in a signed int64 for easy\nconversion to numeric.)\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Wed, 04 Dec 2019 11:49:20 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Unsigned 64 bit integer to numeric"
},
{
"msg_contents": "> On Wed, Dec 04, 2019 at 11:49:20AM +0000, Andrew Gierth wrote:\n>\n> >>>>> \"Dmitry\" == Dmitry Dolgov <9erthalion6@gmail.com> writes:\n>\n> Dmitry> Hi,\n>\n> Dmitry> Probably a simple question, but I don't see a simple answer so\n> Dmitry> far. In one extension I want to convert uint64 into a numeric\n> Dmitry> to put it eventually into a jsonb object. As far as I see in\n> Dmitry> numeric.c there are functions only for signed int64. Is there a\n> Dmitry> way to achive this with uint64 (without duplicating significant\n> Dmitry> part of numeric implementation in the extension)?\n>\n> Sure. Flip the top bit; convert the value as if signed; then subtract\n> -(2^63) from the result. (Easier to subtract -(2^63) than to add 2^63,\n> since the former can itself be represented in a signed int64 for easy\n> conversion to numeric.)\n\nIndeed, looks like this does the trick, thank you!\n\n\n",
"msg_date": "Wed, 4 Dec 2019 20:25:37 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Unsigned 64 bit integer to numeric"
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 16147\nLogged by: Bill Tihen\nEmail address: btihen@gmail.com\nPostgreSQL version: 12.1\nOperating system: MacOS 10.15.1\nDescription: \n\nThe following command crashes with any database I've tried (both large and\nsmall) DBs:\r\n`pg_restore -U wti0405 -d stage3 -h localhost --jobs=8 -Fc\ndatabase_12_04-01-00.bak -x`\r\n\r\nI get the error others have associated with a locking problem in earlier\nversions:\r\n`pg_restore: error: a worker process died unexpectedly`\r\n\r\nInteresting both of the following commands work!\r\n`pg_restore -U wti0405 -d stage3 --jobs=8 -Fc database_12_04-01-00.bak -x`\r\n&\r\n`pg_restore -U wti0405 -d stage3 -h localhost -Fc database_12_04-01-00.bak\n-x`\r\n\r\nSo it appears this is only a problem when using parallelization with a TCP\nand not a local socket.",
"msg_date": "Wed, 04 Dec 2019 11:46:35 +0000",
"msg_from": "PG Bug reporting form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "BUG #16147: postgresql 12.1 (from homebrew) - pg_restore -h localhost\n --jobs=2 crashes"
},
{
"msg_contents": "PG Bug reporting form <noreply@postgresql.org> writes:\n> The following bug has been logged on the website:\n> Bug reference: 16147\n> Logged by: Bill Tihen\n> Email address: btihen@gmail.com\n> PostgreSQL version: 12.1\n> Operating system: MacOS 10.15.1\n> Description: \n\n> The following command crashes with any database I've tried (both large and\n> small) DBs:\n> `pg_restore -U wti0405 -d stage3 -h localhost --jobs=8 -Fc\n> database_12_04-01-00.bak -x`\n\nI failed to reproduce this on my own 10.15.1 laptop, using manual\nbuilds of either HEAD or the v12 branch. Plausible reasons for\nthe difference in results might include:\n\n* There's something different about the homebrew build (could we\nsee the output of pg_config?)\n\n* There's something unusual about your configuration (one thought\nthat comes to mind: do you have SSL turned on for localhost\nconnections?)\n\n* There's something about the data in this specific database\n(your report that it happens for multiple databases puts a crimp\nin this idea, though maybe they all share a common feature)\n\nAnyway, we need more info to investigate. You might try looking\ninto the server log to see what the failure looks like from that\nside --- is there a query error, or just the worker disconnecting\nunexpectedly?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 Dec 2019 09:03:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16147: postgresql 12.1 (from homebrew) - pg_restore -h\n localhost --jobs=2 crashes"
},
{
    "msg_contents": "I encounter a similar issue with the following differences to OP\n\n- Operating system: MacOS 10.14.6\n- PostgreSQL (both server & pg_restore) installed via Postgres.app\nhttps://github.com/PostgresApp/PostgresApp/releases/tag/v2.3.2\n\npg_restore's path is\n/Applications/Postgres.app/Contents/Versions/latest/bin/pg_restore\n\nSame as OP:\nif the number of jobs is 1,\nor when setting the host to a unix socket,\nthe error \"pg_restore: error: a worker process died unexpectedly\" won't be\nraised.\nHowever this error doesn't happen with the Ubuntu 18.04 build of PostgreSQL 12\n(\nhttps://www.ubuntuupdates.org/package/postgresql/bionic-pgdg/main/base/postgresql-12\n)\nSo it might or might not be a build issue in the MacOS version,\nthough having the same issue on both distribution channels (homebrew &\nPostgres.app) seems rare.\n\nAnyway, reported to Postgres.app too:\nhttps://github.com/PostgresApp/PostgresApp/issues/538\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-bugs-f2117394.html\n\n\n",
"msg_date": "Mon, 9 Dec 2019 01:57:34 -0700 (MST)",
"msg_from": "PikachuEXE <pikachuexe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16147: postgresql 12.1 (from homebrew) - pg_restore -h\n localhost --jobs=2 crashes"
},
{
"msg_contents": "Hello,\nIt doesn't work for me also even on PG version 12.2\n\nOS: Ubuntu 18.04.4 LTS\npostgresql-12 12.2-1.pgdg18.04+1 \namd64\npg_dump (PostgreSQL) 12.2 (Ubuntu 12.2-1.pgdg18.04+1)\npg_restore (PostgreSQL) 12.2 (Ubuntu 12.2-1.pgdg18.04+1)\n\npg_restore: error: could not find block ID 11550 in archive -- possibly due\nto out-of-order restore request, which cannot be handled due to lack of data\noffsets in archive\npg_restore: error: could not find block ID 11546 in archive -- possibly due\nto out-of-order restore request, which cannot be handled due to lack of data\noffsets in archive\npg_restore: error: a worker process died unexpectedly\n\nHowever, it works with --jobs=1 \n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-bugs-f2117394.html\n\n\n",
"msg_date": "Tue, 18 Feb 2020 00:23:46 -0700 (MST)",
"msg_from": "sailor <pavlok@enapps.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16147: postgresql 12.1 (from homebrew) - pg_restore -h\n localhost --jobs=2 crashes"
},
{
"msg_contents": "Hi,\n\nI can reproduce this pg_restore crash issue (pg_dump crash too when \nrunning with multiple jobs) on MacOS 10.14 Mojave and MacOS 10.15 \nCatalina using following steps.\n\n1. build pg_resotre from 12.2 with \"--with-gssapi\" enabled, or use the \nrelease from https://www.postgresql.org/download/macosx/\n\n2. start pg server and generate some load,\n pgbench -i -p 5432 -d postgres -s 10\"\n\n3. backup database,\n pg_dump -h localhost -Fc --no-acl --no-owner postgres > /tmp/128m\n\n4. drop the tables,\n psql -d postgres -c \"drop table pgbench_accounts; drop table \npgbench_branches; drop table pgbench_history; drop table pgbench_tellers;\"\n\n5. restore database,\n pg_restore -d postgres -h localhost -Fc /tmp/128m --jobs=2\n Password:\n pg_restore: error: a worker process died unexpectedly\n\n6. check tables, all display size 0 bytes.\n postgres=# \\d+\n List of relations\n Schema | Name | Type | Owner | Size | Description\n--------+------------------+-------+----------+---------+-------------\n public | pgbench_accounts | table | postgres | 0 bytes |\n public | pgbench_branches | table | postgres | 0 bytes |\n public | pgbench_history | table | postgres | 0 bytes |\n public | pgbench_tellers | table | postgres | 0 bytes |\n (4 rows)\n\n7. 
core dump, about 2G,\n(lldb) bt all\n* thread #1, stop reason = signal SIGSTOP\n * frame #0: 0x00007fff6c29c44e \nlibdispatch.dylib`_dispatch_mgr_queue_push + 41\n frame #1: 0x00007fff41475a74 \nSecurity`___ZN8Security12KeychainCore14StorageManager14tickleKeychainEPNS0_12KeychainImplE_block_invoke_2 \n+ 76\n frame #2: 0x00007fff6c29250e \nlibdispatch.dylib`_dispatch_client_callout + 8\n frame #3: 0x00007fff6c29e567 \nlibdispatch.dylib`_dispatch_lane_barrier_sync_invoke_and_complete + 60\n frame #4: 0x00007fff41475935 \nSecurity`Security::KeychainCore::StorageManager::tickleKeychain(Security::KeychainCore::KeychainImpl*) \n+ 485\n frame #5: 0x00007fff412400d8 \nSecurity`Security::KeychainCore::KCCursorImpl::next(Security::KeychainCore::Item&) \n+ 352\n frame #6: 0x00007fff41417975 \nSecurity`Security::KeychainCore::IdentityCursor::next(Security::SecPointer<Security::KeychainCore::Identity>&) \n+ 217\n frame #7: 0x00007fff4143c4c3 Security`SecIdentitySearchCopyNext + 155\n frame #8: 0x00007fff414477d8 \nSecurity`SecItemCopyMatching_osx(__CFDictionary const*, void const**) + 261\n frame #9: 0x00007fff4144b024 Security`SecItemCopyMatching + 338\n frame #10: 0x00007fff56dab303 Heimdal`keychain_query + 531\n frame #11: 0x00007fff56da8f4c Heimdal`hx509_certs_find + 92\n frame #12: 0x00007fff56d67b52 Heimdal`_krb5_pk_find_cert + 466\n frame #13: 0x00007fff376da9bb GSS`_gsspku2u_acquire_cred + 619\n frame #14: 0x00007fff376bfc1c GSS`gss_acquire_cred + 940\n frame #15: 0x000000010016e6e1 \nlibpq.5.dylib`pg_GSS_have_cred_cache(cred_out=0x0000000100505688) at \nfe-gssapi-common.c:67:10\n frame #16: 0x000000010014f769 \nlibpq.5.dylib`PQconnectPoll(conn=0x0000000100505310) at fe-connect.c:2785:22\n frame #17: 0x000000010014be9f \nlibpq.5.dylib`connectDBComplete(conn=0x0000000100505310) at \nfe-connect.c:2095:10\n frame #18: 0x000000010014bb0c \nlibpq.5.dylib`PQconnectdbParams(keywords=0x00007ffeefbfeee0, \nvalues=0x00007ffeefbfeea0, expand_dbname=1) at fe-connect.c:625:10\n 
frame #19: 0x000000010000ec20 \npg_restore`ConnectDatabase(AHX=0x0000000100505070, dbname=\"postgres\", \npghost=\"david.highgo.ca\", pgport=0x0000000000000000, username=\"david\", \nprompt_password=TRI_DEFAULT) at pg_backup_db.c:287:20\n frame #20: 0x000000010000a75a \npg_restore`CloneArchive(AH=0x00000001002020f0) at \npg_backup_archiver.c:4850:3\n frame #21: 0x0000000100017b4b \npg_restore`RunWorker(AH=0x00000001002020f0, slot=0x0000000100221718) at \nparallel.c:866:7\n frame #22: 0x00000001000179f5 \npg_restore`ParallelBackupStart(AH=0x00000001002020f0) at parallel.c:1028:4\n frame #23: 0x0000000100004473 \npg_restore`RestoreArchive(AHX=0x00000001002020f0) at \npg_backup_archiver.c:662:12\n frame #24: 0x0000000100001be4 pg_restore`main(argc=10, \nargv=0x00007ffeefbff8f0) at pg_restore.c:447:3\n frame #25: 0x00007fff6c2eb7fd libdyld.dylib`start + 1\n(lldb)\n\n8. however it works with either,\n PGGSSENCMODE=disable pg_restore -d postgres -h localhost -Fc \n/tmp/128m --jobs=2\nor,\n pg_restore -d \"dbname=postgres gssencmode=disable\" -h localhost -Fc \n/tmp/128m --jobs=2\n\n9. 
pg_config output and versions, no SSL configured,\n\n$ pg_config\nBINDIR = /Users/david/sandbox/pg122/app/bin\nDOCDIR = /Users/david/sandbox/pg122/app/share/doc/postgresql\nHTMLDIR = /Users/david/sandbox/pg122/app/share/doc/postgresql\nINCLUDEDIR = /Users/david/sandbox/pg122/app/include\nPKGINCLUDEDIR = /Users/david/sandbox/pg122/app/include/postgresql\nINCLUDEDIR-SERVER = /Users/david/sandbox/pg122/app/include/postgresql/server\nLIBDIR = /Users/david/sandbox/pg122/app/lib\nPKGLIBDIR = /Users/david/sandbox/pg122/app/lib/postgresql\nLOCALEDIR = /Users/david/sandbox/pg122/app/share/locale\nMANDIR = /Users/david/sandbox/pg122/app/share/man\nSHAREDIR = /Users/david/sandbox/pg122/app/share/postgresql\nSYSCONFDIR = /Users/david/sandbox/pg122/app/etc/postgresql\nPGXS = \n/Users/david/sandbox/pg122/app/lib/postgresql/pgxs/src/makefiles/pgxs.mk\nCONFIGURE = '--with-gssapi' '--prefix=/Users/david/sandbox/pg122/app' \n'--enable-debug' 'CFLAGS=-ggdb -O0 -fno-omit-frame-pointer'\nCC = gcc\nCPPFLAGS = -isysroot \n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk\nCFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith \n-Wdeclaration-after-statement -Werror=vla -Wendif-labels \n-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing \n-fwrapv -Wno-unused-command-line-argument -g -ggdb -O0 \n-fno-omit-frame-pointer\nCFLAGS_SL =\nLDFLAGS = -Wl,-dead_strip_dylibs\nLDFLAGS_EX =\nLDFLAGS_SL =\nLIBS = -lpgcommon -lpgport -lgssapi_krb5 -lz -lreadline -lm\nVERSION = PostgreSQL 12.2\n\n$ lldb --version\nlldb-1100.0.30.12\nApple Swift version 5.1.3 (swiftlang-1100.0.282.1 clang-1100.0.33.15)\n\n$ klist --version\nklist (Heimdal 1.5.1apple1)\nCopyright 1995-2011 Kungliga Tekniska Högskolan\nSend bug-reports to heimdal-bugs@h5l.org\n\nHopefully the above information can help.\n\n\nOn 2019-12-04 6:03 a.m., Tom Lane wrote:\n> PG Bug reporting form <noreply@postgresql.org> writes:\n>> The following bug has been logged on the website:\n>> 
Bug reference: 16147\n>> Logged by: Bill Tihen\n>> Email address: btihen@gmail.com\n>> PostgreSQL version: 12.1\n>> Operating system: MacOS 10.15.1\n>> Description:\n>> The following command crashes with any database I've tried (both large and\n>> small) DBs:\n>> `pg_restore -U wti0405 -d stage3 -h localhost --jobs=8 -Fc\n>> database_12_04-01-00.bak -x`\n> I failed to reproduce this on my own 10.15.1 laptop, using manual\n> builds of either HEAD or the v12 branch. Plausible reasons for\n> the difference in results might include:\n>\n> * There's something different about the homebrew build (could we\n> see the output of pg_config?)\n>\n> * There's something unusual about your configuration (one thought\n> that comes to mind: do you have SSL turned on for localhost\n> connections?)\n>\n> * There's something about the data in this specific database\n> (your report that it happens for multiple databases puts a crimp\n> in this idea, though maybe they all share a common feature)\n>\n> Anyway, we need more info to investigate. You might try looking\n> into the server log to see what the failure looks like from that\n> side --- is there a query error, or just the worker disconnecting\n> unexpectedly?\n>\n> \t\t\tregards, tom lane\n>\n>\n>\n>\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca\n\n\n\n",
"msg_date": "Thu, 5 Mar 2020 19:53:35 -0800",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16147: postgresql 12.1 (from homebrew) - pg_restore -h\n localhost --jobs=2 crashes"
},
{
"msg_contents": "I did some of the digging here. The issue still exists in the latest\ngit master. A git bisect pointed me to commit\n548e50976ce721b5e927d42a105c2f05b51b52a6 which did touch the\npg_restore work. The bug doesn't seem to be with the pg_dump -Fc\nformat as I was using an older dump version to do the bisect and it\ncould still trigger the issue in recent PostgreSQL. I have noticed\nthis issue on several Macs. Doing some Google searching you also see\npeople complaining about it on Mac:\n\nhttps://dba.stackexchange.com/questions/257398/pg-restore-with-jobs-flag-results-in-pg-restore-error-a-worker-process-di\n\nhttps://github.com/thoughtbot/parity/issues/175\n\nmentioning things like Homebrew and Postgres.app that are only on Mac.\nThe recommendation seems to be to go back to pg_restore from\npostgresql 11.\n\nI tried a bit of sleuthing here. I've attached a file\ntoc_contents.txt which is the pg_restore -l output of my dump file\nwith some relevant bits grepped. I've attached a file\nprintf_output.txt that has some manual printfs shoved into pg_restore\nto try and see what's going on. Note that I dropped all but the last\nfew restore_toc_entry calls. And finally I attached printf.patch so\nyou can see what I'm logging. The issue seems to be that pg_restore\nis trying to find a TOC that was earlier in the file than the current\noffset, never finds that TOC and fails with the \"possibly due to\nout-of-order restore request\". I've run pg_restore with these patches\na few times now and it always fails in the same way (never seeks up in\nthe file) but in different places each time.\n\nTom, if you or anyone else with PostgreSQL would appreciate the\npg_dump file I can send it to you out of band, it's only a few\nmegabytes. 
I have pg_restore with debug symbols too if you want me to\ntry anything.\n\n\nOn Fri, May 15, 2020 at 9:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> PG Bug reporting form <noreply@postgresql.org> writes:\n> > The following bug has been logged on the website:\n> > Bug reference: 16147\n> > Logged by: Bill Tihen\n> > Email address: btihen@gmail.com\n> > PostgreSQL version: 12.1\n> > Operating system: MacOS 10.15.1\n> > Description:\n>\n> > The following command crashes with any database I've tried (both large and\n> > small) DBs:\n> > `pg_restore -U wti0405 -d stage3 -h localhost --jobs=8 -Fc\n> > database_12_04-01-00.bak -x`\n>\n> I failed to reproduce this on my own 10.15.1 laptop, using manual\n> builds of either HEAD or the v12 branch. Plausible reasons for\n> the difference in results might include:\n>\n> * There's something different about the homebrew build (could we\n> see the output of pg_config?)\n>\n> * There's something unusual about your configuration (one thought\n> that comes to mind: do you have SSL turned on for localhost\n> connections?)\n>\n> * There's something about the data in this specific database\n> (your report that it happens for multiple databases puts a crimp\n> in this idea, though maybe they all share a common feature)\n>\n> Anyway, we need more info to investigate. You might try looking\n> into the server log to see what the failure looks like from that\n> side --- is there a query error, or just the worker disconnecting\n> unexpectedly?\n>\n> regards, tom lane\n>\n>\n>\n>\n\n\n-- \nDavid Gilman\n:DG<",
"msg_date": "Fri, 15 May 2020 09:53:29 -0400",
"msg_from": "David Gilman <davidgilman1@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16147: postgresql 12.1 (from homebrew) - pg_restore -h\n localhost --jobs=2 crashes"
},
{
"msg_contents": "On Thu, Mar 05, 2020 at 07:53:35PM -0800, David Zhang wrote:\n> I can reproduce this pg_restore crash issue (pg_dump crash too when running\n> with multiple jobs) on MacOS 10.14 Mojave and MacOS 10.15 Catalina using\n> following steps.\n\nIsn't this the same as here?\nhttps://www.postgresql.org/message-id/flat/16041-b44f9931ad91fc3d%40postgresql.org\n..concluding that macos library fails after forking.\n\n..I found that via: https://github.com/PostgresApp/PostgresApp/issues/538\n=> https://www.postgresql.org/message-id/1575881854624-0.post%40n3.nabble.com\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 18 May 2020 21:03:52 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16147: postgresql 12.1 (from homebrew) - pg_restore -h\n localhost --jobs=2 crashes"
},
{
    "msg_contents": "It looks like there is a whole bunch of problems here:\n\n- Bill Tihen's original post about restoring over sockets\n- sailor's post about \"out-of-order restore request\" failure\n- David Zhang's crash inside of GSS\n\nI was running into the \"out-of-order restore request\" issue. My post\nabove was the first half of my investigations and was incorrect. The\n\"out-of-order restore request\" comes from custom dumps that were\nwritten to a non-seekable file descriptor. They silently lack certain\nmetadata and can't be used for parallel restores in PostgreSQL 12 or\nlater. If you're running into \"out-of-order restore request\" errors\ntry running pg_dump with the -f flag to force it to create a dump file\nwith the correct metadata. If you can't change how the dumps are made\nyou can't use parallel pg_restore with them. There isn't a code fix\nto this problem, everything is working as expected, but I've sent in a\npatch to help document the behavior.\nhttps://www.postgresql.org/message-id/CALBH9DDuJ%2BscZc4MEvw5uO-%3DvRyR2%3DQF9%2BYh%3D3hPEnKHWfS81A%40mail.gmail.com\n\n\nOn Mon, May 18, 2020 at 10:04 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Mar 05, 2020 at 07:53:35PM -0800, David Zhang wrote:\n> > I can reproduce this pg_restore crash issue (pg_dump crash too when running\n> > with multiple jobs) on MacOS 10.14 Mojave and MacOS 10.15 Catalina using\n> > following steps.\n>\n> Isn't this the same as here?\n> https://www.postgresql.org/message-id/flat/16041-b44f9931ad91fc3d%40postgresql.org\n> ..concluding that macos library fails after forking.\n>\n> ..I found that via: https://github.com/PostgresApp/PostgresApp/issues/538\n> => https://www.postgresql.org/message-id/1575881854624-0.post%40n3.nabble.com\n>\n> --\n> Justin\n>\n>\n\n\n-- \nDavid Gilman\n:DG<\n\n\n",
"msg_date": "Tue, 19 May 2020 20:52:25 -0400",
"msg_from": "David Gilman <davidgilman1@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16147: postgresql 12.1 (from homebrew) - pg_restore -h\n localhost --jobs=2 crashes"
}
] |
[
{
"msg_contents": "Run the regression tests with \"make installcheck\", then:\n\n$ pg_dump -Fc regression >r.dump\n$ createdb r2\n$ pg_restore -d r2 r.dump\npg_restore: while PROCESSING TOC:\npg_restore: from TOC entry 6005; 2604 24821 DEFAULT gtest1_1 b postgres\npg_restore: error: could not execute query: ERROR: column \"b\" of relation \"gtest1_1\" is a generated column\nCommand was: ALTER TABLE ONLY public.gtest1_1 ALTER COLUMN b SET DEFAULT (a * 2);\n\n\npg_restore: warning: errors ignored on restore: 1\n$ \n\nIt looks like gtest1_1 inherits column \"b\" from gtest1, so possibly\npg_dump is just confused about the combination of inheritance and\ngenerated columns.\n\nI see this in v12 as well as HEAD. One interesting question is how come\nthe pg_upgrade test isn't failing --- maybe binary-upgrade mode handles\nthis case differently?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 Dec 2019 09:14:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "On 2019-12-04 15:14, Tom Lane wrote:\n> Run the regression tests with \"make installcheck\", then:\n> \n> $ pg_dump -Fc regression >r.dump\n> $ createdb r2\n> $ pg_restore -d r2 r.dump\n> pg_restore: while PROCESSING TOC:\n> pg_restore: from TOC entry 6005; 2604 24821 DEFAULT gtest1_1 b postgres\n> pg_restore: error: could not execute query: ERROR: column \"b\" of relation \"gtest1_1\" is a generated column\n> Command was: ALTER TABLE ONLY public.gtest1_1 ALTER COLUMN b SET DEFAULT (a * 2);\n> \n> \n> pg_restore: warning: errors ignored on restore: 1\n> $\n> \n> It looks like gtest1_1 inherits column \"b\" from gtest1, so possibly\n> pg_dump is just confused about the combination of inheritance and\n> generated columns.\n\nYeah, there was some stuff about the \"separate\" dumping of defaults that \nI apparently forgot to address. The attached patch fixes it. I'll see \nabout adding a test case for it, too.\n\n> I see this in v12 as well as HEAD. One interesting question is how come\n> the pg_upgrade test isn't failing --- maybe binary-upgrade mode handles\n> this case differently?\n\nBinary upgrade dumps out even inherited columns, so it won't run into \nthe \"separate\" case that's the issue here.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 4 Dec 2019 21:17:21 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-12-04 15:14, Tom Lane wrote:\n>> It looks like gtest1_1 inherits column \"b\" from gtest1, so possibly\n>> pg_dump is just confused about the combination of inheritance and\n>> generated columns.\n\n> Yeah, there was some stuff about the \"separate\" dumping of defaults that \n> I apparently forgot to address. The attached patch fixes it. I'll see \n> about adding a test case for it, too.\n\nI don't think this is right. It will probably misbehave if the\n\"generated\" expression has any interesting dependencies:\n\n1. You didn't duplicate the behavior of the existing separate=false\ncase, where it adds a dependency to try to force the default's\ndependencies to exist before the table is created.\n\n2. If that dependency turns out to create a dependency loop, the\nlater code will break the loop by setting separate=true anyway.\nThen what?\n\nI also find it improbable that overriding the !shouldPrintColumn\ntest is really the right thing. That test is what distinguishes\nthe is-a-parent-table from the is-a-child-table cases, and the\ncore of the issue here seems to be that we need to treat those\ndifferently.\n\nI wonder if the right fix is to not generate a DO_ATTRDEF\nobject at all for generated columns in child tables. Am\nI right in guessing that we propagate generated-ness to\nchild tables automatically?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 Dec 2019 15:36:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "On 2019-12-04 21:36, Tom Lane wrote:\n> I wonder if the right fix is to not generate a DO_ATTRDEF\n> object at all for generated columns in child tables. Am\n> I right in guessing that we propagate generated-ness to\n> child tables automatically?\n\nRight. New patch using that approach attached. (Could use more \nextensive comments.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 3 Feb 2020 19:58:43 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-12-04 21:36, Tom Lane wrote:\n>> I wonder if the right fix is to not generate a DO_ATTRDEF\n>> object at all for generated columns in child tables. Am\n>> I right in guessing that we propagate generated-ness to\n>> child tables automatically?\n\n> Right. New patch using that approach attached. (Could use more \n> extensive comments.)\n\nThis looks more plausible than the previous attempt, but it's clearly\nstill not right, because this is what it changes in the regression\ntest dump:\n\n--- r.dump.head\t2020-02-03 14:16:15.774305437 -0500\n+++ r.dump.patch\t2020-02-03 14:18:08.599109703 -0500\n@@ -15244,14 +15244,7 @@\n -- Name: gtest1_1 b; Type: DEFAULT; Schema: public; Owner: postgres\n --\n \n-ALTER TABLE ONLY public.gtest1_1 ALTER COLUMN b SET DEFAULT (a * 2);\n-\n-\n---\n--- Name: gtest30_1 b; Type: DEFAULT; Schema: public; Owner: postgres\n---\n-\n-ALTER TABLE ONLY public.gtest30_1 ALTER COLUMN b SET DEFAULT (a * 2);\n+ALTER TABLE ONLY public.gtest1_1 ALTER COLUMN b SET DEFAULT NULL;\n \n \n --\n\nThis is showing us at least two distinct problems. Now as for\n\"gtest30_1\", what we have is that in the parent table \"gtest30\", column b\nexists but it has no default; the generated property is only added\nat the child table gtest30_1. So we need to emit ALTER COLUMN SET\nGENERATED ALWAYS for gtest30_1.b. HEAD is already doing the wrong\nthing there (it's emitting the expression, but as a plain default\nnot GENERATED). And this patch makes it emit nothing, even worse.\nI think the key point here is that \"attislocal\" refers to whether the\ncolumn itself is locally defined, not to whether its default is.\n\nThings are evidently also going wrong for \"gtest1_1\". In that case\nthe generated property is inherited from the parent gtest1, so we\nshouldn't be emitting anything ... how come the patch fails to\nmake it do that?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Feb 2020 14:32:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "On 2020-02-03 20:32, Tom Lane wrote:\n > Things are evidently also going wrong for \"gtest1_1\". In that case\n > the generated property is inherited from the parent gtest1, so we\n > shouldn't be emitting anything ... how come the patch fails to\n > make it do that?\n\nThis is fixed by the attached new patch. It needed an additional check \nin flagInhAttrs().\n\n> This is showing us at least two distinct problems. Now as for\n> \"gtest30_1\", what we have is that in the parent table \"gtest30\", column b\n> exists but it has no default; the generated property is only added\n> at the child table gtest30_1. So we need to emit ALTER COLUMN SET\n> GENERATED ALWAYS for gtest30_1.b. HEAD is already doing the wrong\n> thing there (it's emitting the expression, but as a plain default\n> not GENERATED). And this patch makes it emit nothing, even worse.\n> I think the key point here is that \"attislocal\" refers to whether the\n> column itself is locally defined, not to whether its default is.\n\nThis is a bit of a mess. Let me explain my thinking on generated \ncolumns versus inheritance.\n\nIf a parent table has a generated column, then any inherited column must \nalso be generated and use the same expression. (Otherwise querying the \nparent table would produce results that are inconsistent with the \ngeneration expression if the rows come from the child table.)\n\nIf a parent table has a column that is not generated, then I think it \nwould be semantically sound if a child table had that same column but \ngenerated. However, I think it would be very tricky to support this \ncorrectly, and it doesn't seem useful enough, so I'd rather not do it.\n\nThat's what the gtest30_1 case above shows. It's not even clear whether \nit's possible to dump this correctly in all cases because the syntax \nthat you allude to \"turn this existing column into a generated column\" \ndoes not exist.\n\nNote that the gtest30 test case is new in master. 
I'm a bit confused \nwhy things were done that way, and I'll need to revisit this. I've also \nfound a few more issues with how certain combinations of DDL can create \nsimilar situations that arguably don't make sense, and I'll continue to \nlook into those. Basically, my contention is that gtest30_1 should not \nbe allowed to exist like that.\n\nHowever, the pg_dump issue is separate from those because it affects a \ncase that is clearly legitimate. So assuming that we end up agreeing on \na version of the attached pg_dump patch, I would like to get that into \nthe next minor releases and then investigate the other issues separately.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 6 Feb 2020 20:36:24 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-02-03 20:32, Tom Lane wrote:\n>> This is showing us at least two distinct problems. Now as for\n>> \"gtest30_1\", what we have is that in the parent table \"gtest30\", column b\n>> exists but it has no default; the generated property is only added\n>> at the child table gtest30_1. So we need to emit ALTER COLUMN SET\n>> GENERATED ALWAYS for gtest30_1.b. HEAD is already doing the wrong\n>> thing there (it's emitting the expression, but as a plain default\n>> not GENERATED). And this patch makes it emit nothing, even worse.\n>> I think the key point here is that \"attislocal\" refers to whether the\n>> column itself is locally defined, not to whether its default is.\n\n> This is a bit of a mess. Let me explain my thinking on generated \n> columns versus inheritance.\n\n> If a parent table has a generated column, then any inherited column must \n> also be generated and use the same expression. (Otherwise querying the \n> parent table would produce results that are inconsistent with the \n> generation expression if the rows come from the child table.)\n\nCheck.\n\n> If a parent table has a column that is not generated, then I think it \n> would be semantically sound if a child table had that same column but \n> generated. However, I think it would be very tricky to support this \n> correctly, and it doesn't seem useful enough, so I'd rather not do it.\n\nSo ... why is that so hard exactly? AFAICS, the existing regression\ntest cases show that it works fine. Except that pg_dump gets it wrong.\nIn general, we surely want to support child columns that have defaults\ndifferent from the parent column's default, so this doesn't seem quite\nthat huge a leap to me.\n\n> That's what the gtest30_1 case above shows. 
It's not even clear whether \n> it's possible to dump this correctly in all cases because the syntax \n> that you allude to \"turn this existing column into a generated column\" \n> does not exist.\n\nI'm a little confused by that statement. What is this doing, if not\nthat:\n\nregression=# create table foo (f1 int not null);\nCREATE TABLE\nregression=# alter table foo alter column f1 add generated always as identity;\nALTER TABLE\nregression=# \\d foo\n Table \"public.foo\"\n Column | Type | Collation | Nullable | Default \n--------+---------+-----------+----------+------------------------------\n f1 | integer | | not null | generated always as identity\n\nIf we didn't have things like ALTER ... SET GENERATED and\nALTER ... DROP EXPRESSION, I'd be a lot more content to accept\nthe position that generated-ness is an immutable column property.\nBut we do have those things, so the restriction you're proposing\nseems mighty arbitrary.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 07 Feb 2020 17:16:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "I arrived at this thread while investigating the same issue recently\nreported[1].\n\nOn Fri, 7 Feb 2020 at 04:36, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-02-03 20:32, Tom Lane wrote:\n> > Things are evidently also going wrong for \"gtest1_1\". In that case\n> > the generated property is inherited from the parent gtest1, so we\n> > shouldn't be emitting anything ... how come the patch fails to\n> > make it do that?\n>\n> This is fixed by the attached new patch. It needed an additional check\n> in flagInhAttrs().\n>\n> > This is showing us at least two distinct problems. Now as for\n> > \"gtest30_1\", what we have is that in the parent table \"gtest30\", column b\n> > exists but it has no default; the generated property is only added\n> > at the child table gtest30_1. So we need to emit ALTER COLUMN SET\n> > GENERATED ALWAYS for gtest30_1.b. HEAD is already doing the wrong\n> > thing there (it's emitting the expression, but as a plain default\n> > not GENERATED). And this patch makes it emit nothing, even worse.\n> > I think the key point here is that \"attislocal\" refers to whether the\n> > column itself is locally defined, not to whether its default is.\n>\n> This is a bit of a mess. Let me explain my thinking on generated\n> columns versus inheritance.\n>\n> If a parent table has a generated column, then any inherited column must\n> also be generated and use the same expression. 
(Otherwise querying the\n> parent table would produce results that are inconsistent with the\n> generation expression if the rows come from the child table.)\n\nAfter investigating this issue, I think that current DDLs regarding\ninherited tables and generated columns seem not to work fine.\n\nWe can make an inherited table have the same column having a different\ngeneration expression as follows:\n\n=# create table p1 (a int, b int generated always as (a + 1) stored);\n=# create table c1 (a int, b int generated always as (a + 2) stored)\ninherits(p1);\n\nBut the column on the inherited table has a default value, the column\nwill be generation expression with a const value:\n\n=# create table p2 (a int, b int generated always as (a + 1) stored);\n=# create table c2 (a int, b int default 10) inherits(p2);\n=# \\d c2\n Table \"public.c2\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------------------------------\n a | integer | | |\n b | integer | | | generated always as (10) stored\nInherits: p2\n\nAlso, CREATE TABLE doesn't support to create a generated column on\ninherited table, which is the same name but is a normal column on the\nparent table, as follows:\n\n=# create table p3 (a int, b int);\n=# create table c3 (a int, b int generated always as (a + 2) stored)\ninherits(p3);\nERROR: cannot use column reference in DEFAULT expression\nLINE 1: ...reate table c3 (a int, b int generated always as (a + 2) sto...\n\nAside from the error message seems not correct, it's actually possible\nthat we can have only the inherited table's column have a generation\nexpression by:\n\n=# create table p4 (a int, b int);\n=# create table c4 (a int);\n=# alter table c4 add column b int generated always as (a * 3) stored;\n=# alter table c4 inherit p4;\n\nBecause of this behavior, pg_dump generates a query for the table c4\nthat cannot be restored.\n\nI think we can fix these issues with the attached patch but it seems\nbetter 
discussing the desired behavior first.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/2678bad1-048f-519a-ef24-b12962f41807@enterprisedb.com\n\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 23 Apr 2020 15:35:18 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "On 2020-04-23 08:35, Masahiko Sawada wrote:\n> After investigating this issue, I think that current DDLs regarding\n> inherited tables and generated columns seem not to work fine.\n\nRight, there were a number of combinations that were not properly \nhandled. The attached patch should fix them all. It's made against \nPG12 but also works on master. See contained commit message and \ndocumentation for details.\n\n(This does not touch the issues with pg_dump, but it helps clarify the \ncases that pg_dump needs to handle.)\n\n> We can make an inherited table have the same column having a different\n> generation expression as follows:\n> \n> =# create table p1 (a int, b int generated always as (a + 1) stored);\n> =# create table c1 (a int, b int generated always as (a + 2) stored)\n> inherits(p1);\n\nWith my patch, this becomes an error.\n\n> But the column on the inherited table has a default value, the column\n> will be generation expression with a const value:\n> \n> =# create table p2 (a int, b int generated always as (a + 1) stored);\n> =# create table c2 (a int, b int default 10) inherits(p2);\n> =# \\d c2\n> Table \"public.c2\"\n> Column | Type | Collation | Nullable | Default\n> --------+---------+-----------+----------+---------------------------------\n> a | integer | | |\n> b | integer | | | generated always as (10) stored\n> Inherits: p2\n\nWith my patch, this also becomes an error.\n\n> Also, CREATE TABLE doesn't support to create a generated column on\n> inherited table, which is the same name but is a normal column on the\n> parent table, as follows:\n> \n> =# create table p3 (a int, b int);\n> =# create table c3 (a int, b int generated always as (a + 2) stored)\n> inherits(p3);\n> ERROR: cannot use column reference in DEFAULT expression\n> LINE 1: ...reate table c3 (a int, b int generated always as (a + 2) sto...\n\nThis is allowed with my patch (which is basically an expanded version of \nyour patch).\n\n-- \nPeter Eisentraut 
http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 6 May 2020 16:29:31 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "On 2020-05-06 16:29, Peter Eisentraut wrote:\n> On 2020-04-23 08:35, Masahiko Sawada wrote:\n>> After investigating this issue, I think that current DDLs regarding\n>> inherited tables and generated columns seem not to work fine.\n> \n> Right, there were a number of combinations that were not properly\n> handled. The attached patch should fix them all. It's made against\n> PG12 but also works on master. See contained commit message and\n> documentation for details.\n\ncommitted to master and PG12\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 8 May 2020 11:47:22 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> Right, there were a number of combinations that were not properly\n>> handled. The attached patch should fix them all. It's made against\n>> PG12 but also works on master. See contained commit message and\n>> documentation for details.\n\n> committed to master and PG12\n\nSo ... this did not actually fix the dump/restore problem. In fact,\nit's worse, because in HEAD I see two failures not one when doing the\nsame test proposed at the start of this thread:\n\n1. make installcheck\n2. pg_dump -Fc regression >r.dump\n3. createdb r2\n4. pg_restore -d r2 r.dump\n\npg_restore: while PROCESSING TOC:\npg_restore: from TOC entry 6253; 2604 226187 DEFAULT gtest1_1 b postgres\npg_restore: error: could not execute query: ERROR: column \"b\" of relation \"gtest1_1\" is a generated column\nCommand was: ALTER TABLE ONLY public.gtest1_1 ALTER COLUMN b SET DEFAULT (a * 2);\n\n\npg_restore: from TOC entry 6279; 2604 227276 DEFAULT gtest30_1 b postgres\npg_restore: error: could not execute query: ERROR: cannot use column reference in DEFAULT expression\nCommand was: ALTER TABLE ONLY public.gtest30_1 ALTER COLUMN b SET DEFAULT (a * 2);\n\n\npg_restore: warning: errors ignored on restore: 2\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 Jul 2020 15:29:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "On Thu, 16 Jul 2020 at 04:29, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> >> Right, there were a number of combinations that were not properly\n> >> handled. The attached patch should fix them all. It's made against\n> >> PG12 but also works on master. See contained commit message and\n> >> documentation for details.\n>\n> > committed to master and PG12\n>\n> So ... this did not actually fix the dump/restore problem. In fact,\n> it's worse, because in HEAD I see two failures not one when doing the\n> same test proposed at the start of this thread:\n>\n> 1. make installcheck\n> 2. pg_dump -Fc regression >r.dump\n> 3. createdb r2\n> 4. pg_restore -d r2 r.dump\n>\n> pg_restore: while PROCESSING TOC:\n> pg_restore: from TOC entry 6253; 2604 226187 DEFAULT gtest1_1 b postgres\n> pg_restore: error: could not execute query: ERROR: column \"b\" of relation \"gtest1_1\" is a generated column\n> Command was: ALTER TABLE ONLY public.gtest1_1 ALTER COLUMN b SET DEFAULT (a * 2);\n>\n>\n> pg_restore: from TOC entry 6279; 2604 227276 DEFAULT gtest30_1 b postgres\n> pg_restore: error: could not execute query: ERROR: cannot use column reference in DEFAULT expression\n> Command was: ALTER TABLE ONLY public.gtest30_1 ALTER COLUMN b SET DEFAULT (a * 2);\n>\n>\n> pg_restore: warning: errors ignored on restore: 2\n>\n\nThe minimum reproducer is:\n\ncreate table a (a int, b int generated always as (a * 2) stored);\ncreate table aa () inherits (a);\n\npg_dump produces the following DDLs:\n\nCREATE TABLE public.a (\n a integer,\n b integer GENERATED ALWAYS AS ((a * 2)) STORED\n);\n\nCREATE TABLE public.aa (\n)\nINHERITS (public.a);\n\nALTER TABLE ONLY public.aa ALTER COLUMN b SET DEFAULT (a * 2);\n\nHowever, the ALTER TABLE fails.\n\nBy commit 086ffddf, the child tables must have the same generation\nexpression as the expression defined in the parent. So I think pg_dump\nshould not generate the last DDL. 
I've attached the patch fixing this\nissue.\n\nApart from the fix, I wonder if we can add a test that dumps the\ndatabase where executed 'make check' and restore it to another\ndatabase.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 23 Jul 2020 19:55:18 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "On Thu, 23 Jul 2020 at 19:55, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 16 Jul 2020 at 04:29, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > >> Right, there were a number of combinations that were not properly\n> > >> handled. The attached patch should fix them all. It's made against\n> > >> PG12 but also works on master. See contained commit message and\n> > >> documentation for details.\n> >\n> > > committed to master and PG12\n> >\n> > So ... this did not actually fix the dump/restore problem. In fact,\n> > it's worse, because in HEAD I see two failures not one when doing the\n> > same test proposed at the start of this thread:\n> >\n> > 1. make installcheck\n> > 2. pg_dump -Fc regression >r.dump\n> > 3. createdb r2\n> > 4. pg_restore -d r2 r.dump\n> >\n> > pg_restore: while PROCESSING TOC:\n> > pg_restore: from TOC entry 6253; 2604 226187 DEFAULT gtest1_1 b postgres\n> > pg_restore: error: could not execute query: ERROR: column \"b\" of relation \"gtest1_1\" is a generated column\n> > Command was: ALTER TABLE ONLY public.gtest1_1 ALTER COLUMN b SET DEFAULT (a * 2);\n> >\n> >\n> > pg_restore: from TOC entry 6279; 2604 227276 DEFAULT gtest30_1 b postgres\n> > pg_restore: error: could not execute query: ERROR: cannot use column reference in DEFAULT expression\n> > Command was: ALTER TABLE ONLY public.gtest30_1 ALTER COLUMN b SET DEFAULT (a * 2);\n> >\n> >\n> > pg_restore: warning: errors ignored on restore: 2\n> >\n>\n> The minimum reproducer is:\n>\n> create table a (a int, b int generated always as (a * 2) stored);\n> create table aa () inherits (a);\n>\n> pg_dump produces the following DDLs:\n>\n> CREATE TABLE public.a (\n> a integer,\n> b integer GENERATED ALWAYS AS ((a * 2)) STORED\n> );\n>\n> CREATE TABLE public.aa (\n> )\n> INHERITS (public.a);\n>\n> ALTER TABLE ONLY public.aa ALTER COLUMN b SET DEFAULT (a * 2);\n>\n> However, the ALTER TABLE 
fails.\n>\n> By commit 086ffddf, the child tables must have the same generation\n> expression as the expression defined in the parent. So I think pg_dump\n> should not generate the last DDL. I've attached the patch fixing this\n> issue.\n>\n> Apart from the fix, I wonder if we can add a test that dumps the\n> database where executed 'make check' and restore it to another\n> database.\n>\n\nThis issue is not fixed yet. I've attached the updated version patch\nand registered it to commit fest so as not to forget. Please review\nit.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 27 Aug 2020 20:30:02 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "I have been analyzing this issue again. We have a few candidate patches \nthat do very similar things for avoiding dumping the generation \nexpression of table gtest1_1. We can figure out later which one of \nthese we like best. But there is another issue lurking nearby. The \ntable hierarchy of gtest30, which is created in the regression tests \nlike this:\n\nCREATE TABLE gtest30 (\n a int,\n b int GENERATED ALWAYS AS (a * 2) STORED\n);\nCREATE TABLE gtest30_1 () INHERITS (gtest30);\nALTER TABLE ONLY gtest30 ALTER COLUMN b DROP EXPRESSION;\n\nThis drops the generation expression from the parent table but not the \nchild table. This is currently dumped like this:\n\nCREATE TABLE public.gtest30 (\n a integer,\n b integer\n);\n\nCREATE TABLE public.gtest30_1 (\n)\nINHERITS (public.gtest30);\n\nALTER TABLE ONLY public.gtest30_1 ALTER COLUMN b SET DEFAULT (a * 2);\n\nThe proposed patches will cause the last statement to be omitted, but \nthat still won't recreate the original state. The problem is that there \nis no command to make a column generated afterwards, like the SET \nDEFAULT command, so we can't dump it like this. We would have to produce\n\nCREATE TABLE public.gtest30 (\n a integer,\n b integer\n);\n\nCREATE TABLE public.gtest30_1 (\n b integer GENERATED ALWAYS AS (a * 2) STORED\n)\nINHERITS (public.gtest30);\n\nbut this will create the column \"b\" of gtest30_1 as attlocal, which the \noriginal command sequence does not.\n\nWe could probably fix this by having ALTER TABLE ONLY / DROP EXPRESSION \nupdate the attlocal column of direct children to true, to make the \ncatalog state look like something that can be restored. However, that's \na fair amount of complicated code, so for now I propose to just prohibit \nthis command, meaning you can't use ONLY in this command if there are \nchildren. 
This is new in PG13, so this change would have very limited \nimpact in practice.\n\nProposed patch attached.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 25 Sep 2020 15:07:53 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "> On 25 Sep 2020, at 15:07, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n\n> We could probably fix this by having ALTER TABLE ONLY / DROP EXPRESSION update the attlocal column of direct children to true, to make the catalog state look like something that can be restored. However, that's a fair amount of complicated code, so for now I propose to just prohibit this command, meaning you can't use ONLY in this command if there are children. This is new in PG13, so this change would have very limited impact in practice.\n\nThat sounds a bit dramatic. Do you propose to do that in v13 as well or just\nin HEAD? If the latter, considering that the window until the 14 freeze is\nquite wide shouldn't we try to fix it first?\n\ncheers ./daniel\n\n",
"msg_date": "Fri, 25 Sep 2020 15:51:37 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> The proposed patches will cause the last statement to be omitted, but \n> that still won't recreate the original state. The problem is that there \n> is no command to make a column generated afterwards, like the SET \n> DEFAULT command, so we can't dump it like this.\n\nRight. I'm not even sure what such a command should do ... would it run\nthrough all existing rows and update them to be the GENERATED value?\n\n> We could probably fix this by having ALTER TABLE ONLY / DROP EXPRESSION \n> update the attlocal column of direct children to true, to make the \n> catalog state look like something that can be restored. However, that's \n> a fair amount of complicated code, so for now I propose to just prohibit \n> this command, meaning you can't use ONLY in this command if there are \n> children. This is new in PG13, so this change would have very limited \n> impact in practice.\n\n+1. At this point we would want some fairly un-complicated fix for\nthe v13 branch anyhow, and this seems to fit the bill. (Also, having\nchild columns suddenly grow an attislocal property doesn't seem to meet\nthe principle of least surprise.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 25 Sep 2020 10:16:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "On Fri, 25 Sep 2020 at 22:07, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> I have been analyzing this issue again. We have a few candidate patches\n> that do very similar things for avoiding dumping the generation\n> expression of table gtest1_1. We can figure out later which one of\n> these we like best. But there is another issue lurking nearby. The\n> table hierarchy of gtest30, which is created in the regression tests\n> like this:\n>\n> CREATE TABLE gtest30 (\n> a int,\n> b int GENERATED ALWAYS AS (a * 2) STORED\n> );\n> CREATE TABLE gtest30_1 () INHERITS (gtest30);\n> ALTER TABLE ONLY gtest30 ALTER COLUMN b DROP EXPRESSION;\n>\n> This drops the generation expression from the parent table but not the\n> child table. This is currently dumped like this:\n>\n> CREATE TABLE public.gtest30 (\n> a integer,\n> b integer\n> );\n>\n> CREATE TABLE public.gtest30_1 (\n> )\n> INHERITS (public.gtest30);\n>\n> ALTER TABLE ONLY public.gtest30_1 ALTER COLUMN b SET DEFAULT (a * 2);\n>\n> The proposed patches will cause the last statement to be omitted, but\n> that still won't recreate the original state. The problem is that there\n> is no command to make a column generated afterwards, like the SET\n> DEFAULT command, so we can't dump it like this. We would have to produce\n>\n> CREATE TABLE public.gtest30 (\n> a integer,\n> b integer\n> );\n>\n> CREATE TABLE public.gtest30_1 (\n> b integer GENERATED ALWAYS AS (a * 2) STORED\n> )\n> INHERITS (public.gtest30);\n>\n> but this will create the column \"b\" of gtest30_1 as attlocal, which the\n> original command sequence does not.\n>\n> We could probably fix this by having ALTER TABLE ONLY / DROP EXPRESSION\n> update the attlocal column of direct children to true, to make the\n> catalog state look like something that can be restored. 
However, that's\n> a fair amount of complicated code, so for now I propose to just prohibit\n> this command, meaning you can't use ONLY in this command if there are\n> children. This is new in PG13, so this change would have very limited\n> impact in practice.\n>\n> Proposed patch attached.\n\n+1\n\nIf we have ALTER TABLE ONLY / DROP EXPRESSION update the attlocal\ncolumn of children to true to fix the issue you raised, my proposed\npatch is not necessary. OTOH if we fix it by prohibiting the command,\nI guess we need both patches to fix the issues.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 29 Sep 2020 07:36:54 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "[ Pulling Daniel into this older thread seems like the cleanest way to\n unify the two threads ]\n\nMasahiko Sawada <masahiko.sawada@2ndquadrant.com> writes:\n> If we have ALTER TABLE ONLY / DROP EXPRESSION update the attlocal\n> column of children to true to fix the issue you raised, my proposed\n> patch is not necessary. OTOH if we fix it by prohibiting the command,\n> I guess we need both patches to fix the issues.\n\nRight, Peter already mentioned that we need a further pg_dump fix on\ntop of prohibiting the ONLY ... DROP EXPRESSION case.\n\nBug #16622 [1] is another report of this same issue, and in that thread,\nDaniel argues that the right fix is just\n\n+\t/*\n+\t * Skip if the column isn't local to the table's definition as the attrdef\n+\t * will be printed with the inheritance parent table definition\n+\t */\n+\tif (!tbinfo->attislocal[adnum - 1])\n+\t\treturn;\n\nwithout the attgenerated clause that Masahiko-san proposes.\nI think however that that's wrong. It is possible to have\na non-attislocal column that has its own default, because\nyou can do\n\nd3=# create table parent (f1 int default 2);\nCREATE TABLE\nd3=# create table child (f1 int default 3) inherits(parent);\nNOTICE: merging column \"f1\" with inherited definition\nCREATE TABLE\nd3=# create table child2() inherits(parent);\nCREATE TABLE\nd3=# alter table child2 alter column f1 set default 42;\nALTER TABLE\n\nThis does not cause child2.f1's attislocal property to become\ntrue. Maybe it should have, but it's probably too late for\nthat; at least, pg_dump couldn't assume it's true in older\nservers. Therefore, since we can't tell whether the default\nis inherited or not, we'd better dump it.\n\n(I recall that pg_dump has a hack somewhere that checks for\ntextual equality of CHECK conditions to avoid dumping redundant\nchild copies. 
Maybe we could do something similar here.)\n\nThe situation is different for GENERATED columns, since we disallow\na child having a different GENERATED property than the parent.\nHowever, I think Masahiko-san's patch is not quite right either,\nbecause a column can be both inherited and local. An example is\n\nd3=# create table pgen (a int, b int GENERATED ALWAYS AS (a * 2) STORED);\nCREATE TABLE\nd3=# create table cgen1 (b int) inherits (pgen);\nNOTICE: moving and merging column \"b\" with inherited definition\nDETAIL: User-specified column moved to the position of the inherited column.\nCREATE TABLE\nd3=# select attname, attislocal, attinhcount from pg_attribute where attrelid = 'cgen1'::regclass and attnum>0;\n attname | attislocal | attinhcount \n---------+------------+-------------\n a | f | 1\n b | t | 1\n(2 rows)\n\nSo it appears to me that a more correct coding is\n\n /*\n * Do not print a GENERATED default for an inherited column; it will\n * be inherited from the parent, and the backend won't accept a\n * command to set it separately.\n */\n if (tbinfo->attinhcount[adnum - 1] > 0 && tbinfo->attgenerated[adnum - 1])\n return;\n\nUnfortunately this has still got a problem: it will mishandle the case of\na child column that is GENERATED while its parent is not. Peter opined\nway upthread that we should not allow that, but according to my testing\nwe do.\n\nThis'd all be a lot cleaner if defaults were marked as to whether they\nwere inherited or locally generated. Maybe we ought to work on that.\n\nIn the meantime, maybe the best bet is for pg_dump to try to detect\nwhether a default is identical to a parent default, and not dump it\nif so. That would fix both the GENERATED case where we must not\ndump it, and the non-GENERATED case where it's merely inefficient\nto do so.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/16622-779a79851b4e2491%40postgresql.org\n\n\n",
"msg_date": "Tue, 29 Sep 2020 12:37:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "I wrote:\n> The situation is different for GENERATED columns, since we disallow\n> a child having a different GENERATED property than the parent.\n\nBTW, that alleged prohibition is pretty damn leaky:\n\nd3=# create table pp1 (a int, b int GENERATED ALWAYS AS (a * 2) STORED);\nCREATE TABLE\nd3=# create table cc1 (a int, b int GENERATED ALWAYS AS (a * 3) STORED);\nCREATE TABLE\nd3=# alter table cc1 inherit pp1;\nALTER TABLE\n\nMaybe the *real* fix here is to give up on this idea that they\ncan't be different?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 29 Sep 2020 12:46:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "> On 29 Sep 2020, at 18:37, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> [ Pulling Daniel into this older thread seems like the cleanest way to\n> unify the two threads ]\n> \n> Masahiko Sawada <masahiko.sawada@2ndquadrant.com> writes:\n>> If we have ALTER TABLE ONLY / DROP EXPRESSION update the attlocal\n>> column of children to true to fix the issue you raised, my proposed\n>> patch is not necessary. OTOH if we fix it by prohibiting the command,\n>> I guess we need both patches to fix the issues.\n> \n> Right, Peter already mentioned that we need a further pg_dump fix on\n> top of prohibiting the ONLY ... DROP EXPRESSION case.\n> \n> Bug #16622 [1] is another report of this same issue, and in that thread,\n> Daniel argues that the right fix is just\n> \n> +\t/*\n> +\t * Skip if the column isn't local to the table's definition as the attrdef\n> +\t * will be printed with the inheritance parent table definition\n> +\t */\n> +\tif (!tbinfo->attislocal[adnum - 1])\n> +\t\treturn;\n> \n> without the attgenerated clause that Masahiko-san proposes.\n> I think however that that's wrong. It is possible to have\n> a non-attislocal column that has its own default, because\n> you can do\n\nAh, that's the sequence I didn't think of. I agree with your assessment of\nthis being wrong. Thanks!\n\n> This does not cause child2.f1's attislocal property to become\n> true. Maybe it should have, but it's probably too late for\n> that; at least, pg_dump couldn't assume it's true in older\n> servers. \n\nDo you recall the rationale for it not being set to true? I didn't spot\nanything in the commit history. Intuitively it seems a tad odd.\n\n> Therefore, since we can't tell whether the default\n> is inherited or not, we'd better dump it.\n\nAgreed.\n\n> (I recall that pg_dump has a hack somewhere that checks for\n> textual equality of CHECK conditions to avoid dumping redundant\n> child copies. 
Maybe we could do something similar here.)\n\nUnless someone beats me to it I will take a stab at this to see what it would\nlook like.\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 29 Sep 2020 22:01:09 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 29 Sep 2020, at 18:37, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> This does not cause child2.f1's attislocal property to become\n>> true. Maybe it should have, but it's probably too late for\n>> that; at least, pg_dump couldn't assume it's true in older\n>> servers. \n\n> Do you recall the rationale for it not being set to true? I didn't spot\n> anything in the commit history. Intuitively it seems a tad odd.\n\nI'd bet the explanation is mostly that it didn't occur to anyone\nthat SET DEFAULT should do that. I'm not really proposing that it\nshould either. If we were to make any catalog changes in response\nto this, what I'd vote for is to add an \"inherited\" flag to\npg_attrdef. (I'm not quite sure if a bool would be sufficient,\nor if we'd need to go to the same extent as pg_attribute does,\nand have a bool plus an inheritance count.)\n\nOf course, that line of thought does not lead to anything\nback-patchable. But pg_dump would have to be prepared to cope\nwith the situation in older servers in any case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 29 Sep 2020 16:17:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "On 2020-09-29 18:37, Tom Lane wrote:\n> Unfortunately this has still got a problem: it will mishandle the case of\n> a child column that is GENERATED while its parent is not. Peter opined\n> way upthread that we should not allow that, but according to my testing\n> we do.\n\nDid I opine that? Commit 086ffddf3656fb3d24d9a73ce36cb1102e42cc90 \nexplicitly allowed that case. What we don't want is the other way around.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 2 Oct 2020 19:07:04 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "On 2020-09-25 15:07, Peter Eisentraut wrote:\n> We could probably fix this by having ALTER TABLE ONLY / DROP EXPRESSION\n> update the attlocal column of direct children to true, to make the\n> catalog state look like something that can be restored. However, that's\n> a fair amount of complicated code, so for now I propose to just prohibit\n> this command, meaning you can't use ONLY in this command if there are\n> children. This is new in PG13, so this change would have very limited\n> impact in practice.\n\nWith the minor releases coming up, I have committed this patch and will \nwork on getting the remaining pg_dump issues fixed as well.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 3 Nov 2020 15:53:10 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "On 2020-08-27 13:30, Masahiko Sawada wrote:\n> This issue is not fixed yet. I've attached the updated version patch\n> and registered it to commit fest so as not to forget. Please review\n> it.\n\nA few fixes have been committed in this thread, basically to prevent \nsituations that shouldn't have been allowed.\n\nWhat's left is the originally reported issue that some parts of the \nregression test database are dumped incorrectly. The two proposed \npatches in their most recent versions are\n\nhttps://www.postgresql.org/message-id/attachment/107447/v3-0001-pg_dump-Fix-dumping-of-inherited-generated-column.patch \n(message \nhttps://www.postgresql.org/message-id/b1c831dd-d520-5e7f-0304-0eeed39c9996%402ndquadrant.com)\n\nand\n\nhttps://www.postgresql.org/message-id/attachment/113487/fix_gcolumn_dump_v2.patch \n(message \nhttps://www.postgresql.org/message-id/CA%2Bfd4k6pLzrZDQsdsxcS06AwGRf1DgwOw84sFq9oXNw%2B83nB1g%40mail.gmail.com)\n\nBoth of these result in the same change to the dump output. Both of \nthem have essentially the same idea. The first one adds the \nconditionals during the information gathering phase of pg_dump, the \nsecond one adds the conditionals during the output phase.\n\nAny further thoughts?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 3 Nov 2020 20:23:27 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "On Wed, Nov 4, 2020 at 4:23 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-08-27 13:30, Masahiko Sawada wrote:\n> > This issue is not fixed yet. I've attached the updated version patch\n> > and registered it to commit fest so as not to forget. Please review\n> > it.\n>\n> A few fixes have been committed in this thread, basically to prevent\n> situations that shouldn't have been allowed.\n>\n> What's left is the originally reported issue that some parts of the\n> regression test database are dumped incorrectly. The two proposed\n> patches in their most recent versions are\n>\n> https://www.postgresql.org/message-id/attachment/107447/v3-0001-pg_dump-Fix-dumping-of-inherited-generated-column.patch\n> (message\n> https://www.postgresql.org/message-id/b1c831dd-d520-5e7f-0304-0eeed39c9996%402ndquadrant.com)\n>\n> and\n>\n> https://www.postgresql.org/message-id/attachment/113487/fix_gcolumn_dump_v2.patch\n> (message\n> https://www.postgresql.org/message-id/CA%2Bfd4k6pLzrZDQsdsxcS06AwGRf1DgwOw84sFq9oXNw%2B83nB1g%40mail.gmail.com)\n>\n> Both of these result in the same change to the dump output. Both of\n> them have essentially the same idea. The first one adds the\n> conditionals during the information gathering phase of pg_dump, the\n> second one adds the conditionals during the output phase.\n>\n> Any further thoughts?\n\nI think the first one is better than the second (mine) because it can\nsave the number of intermediate objects.\n\nRegards,\n\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 6 Nov 2020 12:55:07 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "On 2020-11-06 04:55, Masahiko Sawada wrote:\n>> Both of these result in the same change to the dump output. Both of\n>> them have essentially the same idea. The first one adds the\n>> conditionals during the information gathering phase of pg_dump, the\n>> second one adds the conditionals during the output phase.\n>>\n>> Any further thoughts?\n> I think the first one is better than the second (mine) because it can\n> save the number of intermediate objects.\n\nI was hoping to wrap this issue up this week, but I found more problems \nwith how these proposed changes interact with --binary-upgrade mode. I \nthink I need to formalize my findings into pg_dump test cases as a next \nstep. Then we can figure out what combination of tweaks will make them \nall work.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/\n\n\n",
"msg_date": "Mon, 9 Nov 2020 11:43:38 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "On 09.11.2020 13:43, Peter Eisentraut wrote:\n> On 2020-11-06 04:55, Masahiko Sawada wrote:\n>>> Both of these result in the same change to the dump output. Both of\n>>> them have essentially the same idea. The first one adds the\n>>> conditionals during the information gathering phase of pg_dump, the\n>>> second one adds the conditionals during the output phase.\n>>>\n>>> Any further thoughts?\n>> I think the first one is better than the second (mine) because it can\n>> save the number of intermediate objects.\n>\n> I was hoping to wrap this issue up this week, but I found more \n> problems with how these proposed changes interact with \n> --binary-upgrade mode. I think I need to formalize my findings into \n> pg_dump test cases as a next step. Then we can figure out what \n> combination of tweaks will make them all work.\n>\nI am moving this patch to the next CF, but it looks like the discussion \nis a bit stuck.\n\n\nPeter, can you please share your concerns about the interaction of the \npatch with --binary-upgrade mode? If you don't have time to write tests, \nyou can just describe problems.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Mon, 30 Nov 2020 18:37:22 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "I've had another go at this, and I've found a solution that appears to \naddress all the issues I'm aware of. It's all very similar to the \npreviously discussed patches. The main difference is that previous \npatches had attempted to use something like tbinfo->attislocal to \ndetermine whether a column was inherited, but that's not correct. This \npatch uses the existing logic in flagInhAttrs() to find whether there is \na matching parent column with a generation expression. I've added \npg_dump test cases here to check the different variations that the code \naddresses.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/",
"msg_date": "Fri, 29 Jan 2021 14:14:48 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I've had another go at this, and I've found a solution that appears to \n> address all the issues I'm aware of. It's all very similar to the \n> previously discussed patches. The main difference is that previous \n> patches had attempted to use something like tbinfo->attislocal to \n> determine whether a column was inherited, but that's not correct. This \n> patch uses the existing logic in flagInhAttrs() to find whether there is \n> a matching parent column with a generation expression. I've added \n> pg_dump test cases here to check the different variations that the code \n> addresses.\n\nThis is a clear improvement on the current situation, and given that\nthis issue is over a year old, I think we should push and back-patch\nthis in time for February's releases.\n\nHowever ... this doesn't solve all the cases noted in this thread.\nIn the first example I gave at [1],\n\nd3=# create table parent (f1 int default 2);\nCREATE TABLE\nd3=# create table child (f1 int default 3) inherits(parent);\nNOTICE: merging column \"f1\" with inherited definition\nCREATE TABLE\nd3=# create table child2() inherits(parent);\nCREATE TABLE\nd3=# alter table child2 alter column f1 set default 42;\nALTER TABLE\n\npg_dump still fails to restore child2.f1's non-inherited default.\nThat's probably a pre-existing problem, since it doesn't involve\nGENERATED at all, but we shouldn't forget about it.\n\nAlso, in the example from [2],\n\nd3=# create table pp1 (a int, b int GENERATED ALWAYS AS (a * 2) STORED);\nCREATE TABLE\nd3=# create table cc1 (a int, b int GENERATED ALWAYS AS (a * 3) STORED);\nCREATE TABLE\nd3=# alter table cc1 inherit pp1;\nALTER TABLE\n\npg_dump now omits to dump cc1's generation expression, which seems\nstrictly worse than before. 
Admittedly, the backend likely ought to\nbe rejecting this scenario, but it doesn't do so today.\n\nNeither of these points seem like a reason to reject this patch,\nthey're just adjacent work that remains to be done.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/660925.1601397436%40sss.pgh.pa.us\n[2] https://www.postgresql.org/message-id/661371.1601398006%40sss.pgh.pa.us\n\n\n",
"msg_date": "Fri, 29 Jan 2021 11:41:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "On 2021-01-29 17:41, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> I've had another go at this, and I've found a solution that appears to\n>> address all the issues I'm aware of. It's all very similar to the\n>> previously discussed patches. The main difference is that previous\n>> patches had attempted to use something like tbinfo->attislocal to\n>> determine whether a column was inherited, but that's not correct. This\n>> patch uses the existing logic in flagInhAttrs() to find whether there is\n>> a matching parent column with a generation expression. I've added\n>> pg_dump test cases here to check the different variations that the code\n>> addresses.\n> \n> This is a clear improvement on the current situation, and given that\n> this issue is over a year old, I think we should push and back-patch\n> this in time for February's releases.\n\ndone\n\nI will continue working on the other issues that we have been discussing.\n\n> However ... 
this doesn't solve all the cases noted in this thread.\n> In the first example I gave at [1],\n> \n> d3=# create table parent (f1 int default 2);\n> CREATE TABLE\n> d3=# create table child (f1 int default 3) inherits(parent);\n> NOTICE: merging column \"f1\" with inherited definition\n> CREATE TABLE\n> d3=# create table child2() inherits(parent);\n> CREATE TABLE\n> d3=# alter table child2 alter column f1 set default 42;\n> ALTER TABLE\n> \n> pg_dump still fails to restore child2.f1's non-inherited default.\n> That's probably a pre-existing problem, since it doesn't involve\n> GENERATED at all, but we shouldn't forget about it.\n> \n> Also, in the example from [2],\n> \n> d3=# create table pp1 (a int, b int GENERATED ALWAYS AS (a * 2) STORED);\n> CREATE TABLE\n> d3=# create table cc1 (a int, b int GENERATED ALWAYS AS (a * 3) STORED);\n> CREATE TABLE\n> d3=# alter table cc1 inherit pp1;\n> ALTER TABLE\n> \n> pg_dump now omits to dump cc1's generation expression, which seems\n> strictly worse than before. Admittedly, the backend likely ought to\n> be rejecting this scenario, but it doesn't do so today.\n> \n> Neither of these points seem like a reason to reject this patch,\n> they're just adjacent work that remains to be done.\n> \n> \t\t\tregards, tom lane\n> \n> [1] https://www.postgresql.org/message-id/660925.1601397436%40sss.pgh.pa.us\n> [2] https://www.postgresql.org/message-id/661371.1601398006%40sss.pgh.pa.us\n> \n> \n\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/\n\n\n",
"msg_date": "Wed, 3 Feb 2021 13:04:09 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "On 2021-01-29 17:41, Tom Lane wrote:\n> However ... this doesn't solve all the cases noted in this thread.\n> In the first example I gave at [1],\n> \n> d3=# create table parent (f1 int default 2);\n> CREATE TABLE\n> d3=# create table child (f1 int default 3) inherits(parent);\n> NOTICE: merging column \"f1\" with inherited definition\n> CREATE TABLE\n> d3=# create table child2() inherits(parent);\n> CREATE TABLE\n> d3=# alter table child2 alter column f1 set default 42;\n> ALTER TABLE\n> \n> pg_dump still fails to restore child2.f1's non-inherited default.\n> That's probably a pre-existing problem, since it doesn't involve\n> GENERATED at all, but we shouldn't forget about it.\n\n> [1] https://www.postgresql.org/message-id/660925.1601397436%40sss.pgh.pa.us\n\nI can't tell what the problem is in this example. I tried with PG11, \n12, and master, and the schema dump comes out with those same four \ncommands and they restore correctly AFAICT.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/\n\n\n",
"msg_date": "Wed, 3 Feb 2021 16:24:12 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2021-01-29 17:41, Tom Lane wrote:\n>> However ... this doesn't solve all the cases noted in this thread.\n>> In the first example I gave at [1],\n>> d3=# create table parent (f1 int default 2);\n>> CREATE TABLE\n>> d3=# create table child (f1 int default 3) inherits(parent);\n>> NOTICE: merging column \"f1\" with inherited definition\n>> CREATE TABLE\n>> d3=# create table child2() inherits(parent);\n>> CREATE TABLE\n>> d3=# alter table child2 alter column f1 set default 42;\n>> ALTER TABLE\n>> \n>> pg_dump still fails to restore child2.f1's non-inherited default.\n\n> I can't tell what the problem is in this example. I tried with PG11, \n> 12, and master, and the schema dump comes out with those same four \n> commands and they restore correctly AFAICT.\n\nOh! Trying it now, I see that the child2 default does get restored\nas a \"separate default\" object:\n\nALTER TABLE ONLY public.child2 ALTER COLUMN f1 SET DEFAULT 42;\n\nThis is a bit weird, because you'd think it would be handled\nthe same as the other child's default, but it isn't; that\none comes out as\n\nCREATE TABLE public.child (\n f1 integer DEFAULT 3\n)\nINHERITS (public.parent);\n\nwhile child2 looks like\n\nCREATE TABLE public.child2 (\n)\nINHERITS (public.parent);\n\n\nI now suspect that I'd seen this dump of \"child2\" and missed the later\nALTER. So no bug here, just pilot error. Sorry for the noise.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Feb 2021 10:31:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "On 2021-01-29 17:41, Tom Lane wrote:\n> Also, in the example from [2],\n> \n> d3=# create table pp1 (a int, b int GENERATED ALWAYS AS (a * 2) STORED);\n> CREATE TABLE\n> d3=# create table cc1 (a int, b int GENERATED ALWAYS AS (a * 3) STORED);\n> CREATE TABLE\n> d3=# alter table cc1 inherit pp1;\n> ALTER TABLE\n> \n> pg_dump now omits to dump cc1's generation expression, which seems\n> strictly worse than before. Admittedly, the backend likely ought to\n> be rejecting this scenario, but it doesn't do so today.\n> \n> [2]https://www.postgresql.org/message-id/661371.1601398006%40sss.pgh.pa.us\n\nHere is a WIP patch to address this. Probably needs another look for \ncolumn number mapping and all the usual stuff, but the basic idea should \nbe okay.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/",
"msg_date": "Wed, 3 Feb 2021 20:18:33 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "Hi,\n+ if (attribute->attgenerated && !childatt->attgenerated)\n+ ereport(ERROR,\n...\n+ if (attribute->attgenerated && childatt->attgenerated)\n+ {\n\nLooks like for the second if statement,\nchecking attribute->attgenerated should be enough (due to the check from\nthe first if statement).\n\nCheers\n\nOn Wed, Feb 3, 2021 at 11:18 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2021-01-29 17:41, Tom Lane wrote:\n> > Also, in the example from [2],\n> >\n> > d3=# create table pp1 (a int, b int GENERATED ALWAYS AS (a * 2) STORED);\n> > CREATE TABLE\n> > d3=# create table cc1 (a int, b int GENERATED ALWAYS AS (a * 3) STORED);\n> > CREATE TABLE\n> > d3=# alter table cc1 inherit pp1;\n> > ALTER TABLE\n> >\n> > pg_dump now omits to dump cc1's generation expression, which seems\n> > strictly worse than before. Admittedly, the backend likely ought to\n> > be rejecting this scenario, but it doesn't do so today.\n> >\n> > [2]\n> https://www.postgresql.org/message-id/661371.1601398006%40sss.pgh.pa.us\n>\n> Here is a WIP patch to address this. 
Probably needs another look for\n> column number mapping and all the usual stuff, but the basic idea should\n> be okay.\n>\n> --\n> Peter Eisentraut\n> 2ndQuadrant, an EDB company\n> https://www.2ndquadrant.com/\n>\n\nHi,+ if (attribute->attgenerated && !childatt->attgenerated)+ ereport(ERROR,...+ if (attribute->attgenerated && childatt->attgenerated)+ {Looks like for the second if statement, checking attribute->attgenerated should be enough (due to the check from the first if statement).CheersOn Wed, Feb 3, 2021 at 11:18 AM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:On 2021-01-29 17:41, Tom Lane wrote:\n> Also, in the example from [2],\n> \n> d3=# create table pp1 (a int, b int GENERATED ALWAYS AS (a * 2) STORED);\n> CREATE TABLE\n> d3=# create table cc1 (a int, b int GENERATED ALWAYS AS (a * 3) STORED);\n> CREATE TABLE\n> d3=# alter table cc1 inherit pp1;\n> ALTER TABLE\n> \n> pg_dump now omits to dump cc1's generation expression, which seems\n> strictly worse than before. Admittedly, the backend likely ought to\n> be rejecting this scenario, but it doesn't do so today.\n> \n> [2]https://www.postgresql.org/message-id/661371.1601398006%40sss.pgh.pa.us\n\nHere is a WIP patch to address this. Probably needs another look for \ncolumn number mapping and all the usual stuff, but the basic idea should \nbe okay.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/",
"msg_date": "Wed, 3 Feb 2021 16:17:02 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "On 2021-02-04 01:17, Zhihong Yu wrote:\n> Hi,\n> + if (attribute->attgenerated && !childatt->attgenerated)\n> + ereport(ERROR,\n> ...\n> + if (attribute->attgenerated && childatt->attgenerated)\n> + {\n> \n> Looks like for the second if statement, \n> checking attribute->attgenerated should be enough (due to the check from \n> the first if statement).\n\nThanks for taking a look. I figured the way I wrote it makes it easier \nto move the code around or insert other code in the future and doesn't \nmake it so tightly coupled.\n\nAnyway, I figured out how to take account of generation expressions with \ndifferent column orders. I used the same approach that we use for check \nconstraints. The attached patch is good to go from my perspective.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/",
"msg_date": "Fri, 5 Feb 2021 15:18:15 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "On 05.02.21 15:18, Peter Eisentraut wrote:\n> Anyway, I figured out how to take account of generation expressions with \n> different column orders. I used the same approach that we use for check \n> constraints. The attached patch is good to go from my perspective.\n\nDusting this off ... this patch should go into the next minor releases. \nThe attached patch is for master but backpatches without manual \nintervention to PG13 and PG12.",
"msg_date": "Mon, 26 Apr 2021 14:10:44 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
},
{
"msg_contents": "On 26.04.21 14:10, Peter Eisentraut wrote:\n> On 05.02.21 15:18, Peter Eisentraut wrote:\n>> Anyway, I figured out how to take account of generation expressions \n>> with different column orders. I used the same approach that we use \n>> for check constraints. The attached patch is good to go from my \n>> perspective.\n> \n> Dusting this off ... this patch should go into the next minor releases. \n> The attached patch is for master but backpatches without manual \n> intervention to PG13 and PG12.\n\ncommitted\n\n\n",
"msg_date": "Tue, 4 May 2021 12:27:03 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Dumping/restoring fails on inherited generated column"
}
] |
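The consistency rule discussed in the thread above (a generated parent column must be matched by a generated child column) can be sketched standalone. The struct and function names below are illustrative stand-ins for the catalog's Form_pg_attribute fields and the patch's checks, not the committed code:

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal stand-in for the catalog attribute flag; hypothetical type name. */
typedef struct Attr
{
	bool		attgenerated;
} Attr;

/*
 * Returns true when a parent/child column pair is consistent under
 * inheritance, per the fragment quoted in the thread.
 */
static bool
generated_match_ok(const Attr *parent, const Attr *child)
{
	/* Quoted fragment: a generated parent column requires a generated child. */
	if (parent->attgenerated && !child->attgenerated)
		return false;			/* the patch raises ereport(ERROR) here */

	/*
	 * When both are generated, the patch goes on to compare the generation
	 * expressions (with column-number mapping, as for check constraints).
	 * Note Zhihong's point: once the first check has passed, testing
	 * attribute->attgenerated alone is enough to enter that branch.
	 */
	return true;
}
```

The sketch only encodes the one direction shown in the quoted diff; the full patch does more work (expression comparison across differing column orders).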
[
{
"msg_contents": "I just proposed in\nhttps://postgr.es/m/0191204143715.GA17312@alvherre.pgsql the addition of\nstrndup() to our src/port.\n\nI think this should be pretty uncontroversial, but wanted to give a\nheads-up outside that thread. I attach the patch here for completeness.\n\n-- \n�lvaro Herrera http://www.twitter.com/alvherre",
"msg_date": "Wed, 4 Dec 2019 11:40:21 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "adding strndup"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I just proposed in\n> https://postgr.es/m/0191204143715.GA17312@alvherre.pgsql the addition of\n> strndup() to our src/port.\n> I think this should be pretty uncontroversial, but wanted to give a\n> heads-up outside that thread. I attach the patch here for completeness.\n\nGrepping, I notice that ecpg has an ecpg_strndup; should that be\nreplaced with this?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 Dec 2019 10:54:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: adding strndup"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-04 11:40:21 -0300, Alvaro Herrera wrote:\n> I just proposed in\n> https://postgr.es/m/0191204143715.GA17312@alvherre.pgsql the addition of\n> strndup() to our src/port.\n> \n> I think this should be pretty uncontroversial, but wanted to give a\n> heads-up outside that thread. I attach the patch here for completeness.\n\nWell, I personally think it's a bad idea to add further implementations\nfor functions that are in standar libraries on some systems. Especially,\nbut not exclusively, when made available for frontend code, where it's\nnot unlikely that there might be other applications having their own\nimplementations of strndup/whatever.\n\nBesides that reason, I think AC_REPLACE_FUNCS is just a bad mechanism,\nthat yields fragmented source code and needs to implemented differently\nfor windows. The code additionally often will also be badly optimized\nin general, due to tiny translation units without relevant functions\nhaving knoweldge about each other.\n\nI'd just provide pnstrdup() in the frontend, without adding strndup().\n\nI also see no point in adding both pnstrdup() and pg_strndup(). I'm fine\nwith moving towards pg_strndup(), but then we just ought to remove\npnstrdup().\n\n- Andres\n\n\n",
"msg_date": "Wed, 4 Dec 2019 10:04:57 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: adding strndup"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-12-04 11:40:21 -0300, Alvaro Herrera wrote:\n>> I think this should be pretty uncontroversial, but wanted to give a\n>> heads-up outside that thread. I attach the patch here for completeness.\n\n> I'd just provide pnstrdup() in the frontend, without adding strndup().\n\n+1 --- seems like a bunch more mechanism than is warranted. Let's\njust open-code it in pnstrdup. We can rely on strnlen, since that's\nalready supported, and there's not much more there beyond that.\n\n> I also see no point in adding both pnstrdup() and pg_strndup(). I'm fine\n> with moving towards pg_strndup(), but then we just ought to remove\n> pnstrdup().\n\nThere's a fair number of uses of pnstrdup in the backend. While it\nwouldn't be too painful to rename them, I'm not sure I see the point.\n(What I'd really argue for, if we did rename, is \"pstrndup\".)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 Dec 2019 13:58:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: adding strndup"
},
{
"msg_contents": "On 2019-Dec-04, Tom Lane wrote:\n\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-12-04 11:40:21 -0300, Alvaro Herrera wrote:\n> >> I think this should be pretty uncontroversial, but wanted to give a\n> >> heads-up outside that thread. I attach the patch here for completeness.\n> \n> > I'd just provide pnstrdup() in the frontend, without adding strndup().\n> \n> +1 --- seems like a bunch more mechanism than is warranted. Let's\n> just open-code it in pnstrdup. We can rely on strnlen, since that's\n> already supported, and there's not much more there beyond that.\n\nI can get behind that ... it makes the patch a lot smaller. I'm gonna\nsend an updated version in a jiffy.\n\n> > I also see no point in adding both pnstrdup() and pg_strndup(). I'm fine\n> > with moving towards pg_strndup(), but then we just ought to remove\n> > pnstrdup().\n> \n> There's a fair number of uses of pnstrdup in the backend. While it\n> wouldn't be too painful to rename them, I'm not sure I see the point.\n> (What I'd really argue for, if we did rename, is \"pstrndup\".)\n\n*shrug* I also looked for pstrndup() first. And Peter E also in\nhttps://postgr.es/m/1339713732.11971.79.camel@vanquo.pezone.net\nsubmitted an implementation of pstrndup(). I'm not opposed to renaming\nit, but I hesitate to do it at the same time as putting it in pgport.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 4 Dec 2019 16:05:01 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: adding strndup"
},
{
"msg_contents": "On 2019-Dec-04, Alvaro Herrera wrote:\n\n> On 2019-Dec-04, Tom Lane wrote:\n> \n> > Andres Freund <andres@anarazel.de> writes:\n> > > On 2019-12-04 11:40:21 -0300, Alvaro Herrera wrote:\n> > >> I think this should be pretty uncontroversial, but wanted to give a\n> > >> heads-up outside that thread. I attach the patch here for completeness.\n> > \n> > > I'd just provide pnstrdup() in the frontend, without adding strndup().\n> > \n> > +1 --- seems like a bunch more mechanism than is warranted. Let's\n> > just open-code it in pnstrdup. We can rely on strnlen, since that's\n> > already supported, and there's not much more there beyond that.\n> \n> I can get behind that ... it makes the patch a lot smaller.\n\nHere it is.\n\nI noticed that ECPG's copy was setting errno. I had forgot to do that\nin my previous patch, but on second look, malloc failure already sets\nit, so doing it again is pointless.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 4 Dec 2019 19:01:46 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: adding strndup"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> I can get behind that ... it makes the patch a lot smaller.\n\n> Here it is.\n\nLGTM.\n\n> I noticed that ECPG's copy was setting errno. I had forgot to do that\n> in my previous patch, but on second look, malloc failure already sets\n> it, so doing it again is pointless.\n\nRight.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 Dec 2019 17:17:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: adding strndup"
}
] |
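Tom's suggestion in the thread above (skip AC_REPLACE_FUNCS and open-code the copy in pnstrdup() on top of strnlen()) amounts to something like the following. The function name and the NULL-return error handling are illustrative, not the committed code, which reports allocation failure through the usual frontend/backend error paths:

```c
#include <stdlib.h>
#include <string.h>

/*
 * Rough sketch of a pnstrdup()-style helper: copy at most LEN bytes of IN
 * and always NUL-terminate the result.  strnlen() keeps us from reading
 * past a shorter-than-LEN input, which is the property that makes the
 * ported strndup() unnecessary.
 */
static char *
pnstrdup_sketch(const char *in, size_t len)
{
	size_t		actual = strnlen(in, len);	/* never beyond the real string */
	char	   *out = malloc(actual + 1);

	if (out == NULL)
		return NULL;			/* illustrative; real code exits or pallocs */
	memcpy(out, in, actual);
	out[actual] = '\0';
	return out;
}
```

Note Alvaro's follow-up point: on allocation failure there is no need to set errno by hand, because malloc() has already set it.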
[
{
"msg_contents": "In the previous discussions on backtrace support, some people asked for \nbacktraces in more situations. Here is a patch that prints backtraces \non SIGABRT, SIGBUS, and SIGSEGV signals. SIGABRT includes assertions \nand elog(PANIC).\n\nDo signals work like this on Windows? Do we need special EXEC_BACKEND \nsupport?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 4 Dec 2019 20:45:25 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "more backtraces"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-04 20:45:25 +0100, Peter Eisentraut wrote:\n> In the previous discussions on backtrace support, some people asked for\n> backtraces in more situations. Here is a patch that prints backtraces on\n> SIGABRT, SIGBUS, and SIGSEGV signals. SIGABRT includes assertions and\n> elog(PANIC).\n\nHm. Can we really do that somewhat reliably like this? I'd suspect that\nthere'll be some oddities e.g. for stack overflows if done this way. To\nmy knowledge it's not a good idea to intercept SIGBUS/SIGSEGV without\nusing a separate signal stack (cf. sigaltstack) - but using a separate\nstack could also make it harder to determine a correct backtrace?\n\nIt'd be bad if the addition of backtraces for SEGV/BUS suddenly made it\nharder to attach a debugger and getting useful results. Even\ndisregarding the previous concerns, we'll get less useful debugger\ninteractions due to this, e.g. for things like null pointer derefs,\nright?\n\nDoing this for SIGABRT seems like a more clearly good case - by that\npoint we're already removed a few frames from the triggering code\nanyway. So debugging experience won't suffer much. And I don't think\nthere's a corresponding issue with the stack potentially being\ncorrupted / not large enough.\n\n- Andres\n\n\n",
"msg_date": "Wed, 4 Dec 2019 11:59:15 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: more backtraces"
},
{
"msg_contents": "On 2019-12-04 20:59, Andres Freund wrote:\n> On 2019-12-04 20:45:25 +0100, Peter Eisentraut wrote:\n>> In the previous discussions on backtrace support, some people asked for\n>> backtraces in more situations. Here is a patch that prints backtraces on\n>> SIGABRT, SIGBUS, and SIGSEGV signals. SIGABRT includes assertions and\n>> elog(PANIC).\n> \n> Hm. Can we really do that somewhat reliably like this?\n\nI've seen reputable programs that do all kinds of things in SIGSEGV \nhandlers, including running user-defined programs, without taking any \nspecial precautions. So it seems possible in general.\n\n> I'd suspect that\n> there'll be some oddities e.g. for stack overflows if done this way. To\n> my knowledge it's not a good idea to intercept SIGBUS/SIGSEGV without\n> using a separate signal stack (cf. sigaltstack) - but using a separate\n> stack could also make it harder to determine a correct backtrace?\n\nDidn't know about that, but seems useful. I'll look into it.\n\n> It'd be bad if the addition of backtraces for SEGV/BUS suddenly made it\n> harder to attach a debugger and getting useful results. Even\n> disregarding the previous concerns, we'll get less useful debugger\n> interactions due to this, e.g. for things like null pointer derefs,\n> right?\n\nThe backtrace and level of detail jumping around between frames I get in \nlldb looks the same as without this. But it might depend.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 4 Dec 2019 21:31:01 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: more backtraces"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> It'd be bad if the addition of backtraces for SEGV/BUS suddenly made it\n> harder to attach a debugger and getting useful results.\n\nYeah. TBH, I'm not sure I want this, at least not in debug builds.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 Dec 2019 16:34:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: more backtraces"
},
{
"msg_contents": "On 2019-12-04 22:34, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> It'd be bad if the addition of backtraces for SEGV/BUS suddenly made it\n>> harder to attach a debugger and getting useful results.\n> \n> Yeah. TBH, I'm not sure I want this, at least not in debug builds.\n\nI understand that the SEGV/BUS thing can be a bit scary. We can skip it.\n\nAre people interested in backtraces on abort()? That was asked for in \nan earlier thread.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 13 Dec 2019 13:26:43 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: more backtraces"
},
{
"msg_contents": "On Fri, Dec 13, 2019 at 7:26 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2019-12-04 22:34, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> >> It'd be bad if the addition of backtraces for SEGV/BUS suddenly made it\n> >> harder to attach a debugger and getting useful results.\n> >\n> > Yeah. TBH, I'm not sure I want this, at least not in debug builds.\n>\n> I understand that the SEGV/BUS thing can be a bit scary. We can skip it.\n>\n> Are people interested in backtraces on abort()? That was asked for in\n> an earlier thread.\n\nI mean, I think backtraces are great, and we should have more of them.\nIt's possible that trying to do it in certain cases will cause\nproblems, but we could back off those cases as we find them, or maybe\ntry to work around them using sigaltstack(), or maybe back it off in\ndebug builds.\n\nIt would make life a lot easier for me if I never had to explain to a\ncustomer (1) how to install gdb or (2) that they needed to get $BOSS\nto approve installation of development tools on production systems. I\nwould hate to see us shy away from improvements that might reduce the\nneed for such conversations on the theory that bad stuff *might*\nhappen.\n\nIn my experience, the importance of having a stack trace in the log is\ngreatest for a segmentation fault, because otherwise you have no\nindication whatsoever of where the problem happened. Having the query\ntext has been a boon, but it's still not a lot to go on unless the\nsame query crashes every time. In other situations, like a PANIC,\nAssertion failure, or (and this is a big one) non-descriptive error\nmessage (cache look failed for thingy %u) a backtrace is sometimes\nreally helpful as well. You don't *always* need it, but you *often*\nneed it.\n\nIt is absolutely important that we don't break debuggability in the\nservice of getting more stack traces. 
At the same time, there are a\nlot more PostgreSQL users out there than there are PostgreSQL\ndevelopers, and a lot of those people are running non-cassert,\nnon-debug builds. Being able to get debugging information from\nfailures that happen on those installations that enables us to fix\nthings without having to go through a time-consuming process of\nguesswork and attempted reproduction is really valuable. A stack trace\ncan turn a lengthy nightmare into a quick fix.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 14 Dec 2019 22:38:49 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: more backtraces"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Dec 13, 2019 at 7:26 AM Peter Eisentraut\n>> Are people interested in backtraces on abort()? That was asked for in\n>> an earlier thread.\n\nFWIW, I don't have too much of an opinion about abort() yet.\nAren't we covering most of the possible cases for that already?\nI don't think that direct abort() calls are considered good style\nin the backend; it'd mostly get reached via Assert or PANIC.\n\n> It would make life a lot easier for me if I never had to explain to a\n> customer (1) how to install gdb or (2) that they needed to get $BOSS\n> to approve installation of development tools on production systems.\n\nSure, but this facility is not going to have that end result, because\nthe output just isn't detailed enough. If it were, I'd be more interested\nin taking risks to get the output. But as it stands, we're going to\nneed more information in a large fraction of cases, so I'm dubious\nabout doing anything that might actually interfere with collecting\nsuch information.\n\n> Being able to get debugging information from\n> failures that happen on those installations that enables us to fix\n> things without having to go through a time-consuming process of\n> guesswork and attempted reproduction is really valuable. A stack trace\n> can turn a lengthy nightmare into a quick fix.\n\nI think you are supposing that these traces will be as useful as gdb\ntraces. They won't. In particular, where a gdb trace will almost\nalways localize the problem to a line of C code, with these you're\nquite lucky if you can even localize to a specific function. That\nissue is mitigated for the existing use-cases by the fact that there's\nalso a reported error message or assertion condition, so you can use\nthat to narrow down the trap site. 
But that won't help for SIGSEGV.\n\nI think that the most useful next steps would involve trying to get\nbetter printouts from the cases this code already traps, rather than\nextending it to more cases. Maybe eventually we'll feel that this\ncode is useful and reliable enough to justify trying to insert it\ninto SIGSEGV cases; but we're not there today.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 15 Dec 2019 11:06:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: more backtraces"
},
{
"msg_contents": "On 2019-Dec-15, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n\n> > Being able to get debugging information from\n> > failures that happen on those installations that enables us to fix\n> > things without having to go through a time-consuming process of\n> > guesswork and attempted reproduction is really valuable. A stack trace\n> > can turn a lengthy nightmare into a quick fix.\n> \n> I think you are supposing that these traces will be as useful as gdb\n> traces. They won't. In particular, where a gdb trace will almost\n> always localize the problem to a line of C code, with these you're\n> quite lucky if you can even localize to a specific function.\n\nThat's already been my experience :-(\n\n> I think that the most useful next steps would involve trying to get\n> better printouts from the cases this code already traps,\n\n+1\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 15 Dec 2019 18:28:50 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: more backtraces"
}
] |
[
{
"msg_contents": "Until PostgreSQL 9.1, it was possible to run ALTER TABLE ... SET \nSTATISTICS without allow_system_table_mods. In PostgreSQL 9.2 and \nlater, this no longer works. This change was apparently accidental. (I \ngave up after a while trying to bisect it exactly, but probably \nsomething related to 1489e2f26a4c0318938b3085f50976512f321d84.)\n\n(It didn't work on mapped relations, so it wasn't all roses.)\n\nA comment in ATPrepSetStatistics() still makes references to this being \npossible. I propose to remove this comment.\n\nThere was some discussion about (re-)allowing this and some other \ncommands like this, but after the recent changes to make \nallow_system_table_mods easier to use, I think this has less urgency, so \nI'd rather get the comment correct in the meantime.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 4 Dec 2019 23:56:34 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "about allow_system_table_mods and SET STATISTICS"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Until PostgreSQL 9.1, it was possible to run ALTER TABLE ... SET \n> STATISTICS without allow_system_table_mods. In PostgreSQL 9.2 and \n> later, this no longer works. This change was apparently accidental. (I \n> gave up after a while trying to bisect it exactly, but probably \n> something related to 1489e2f26a4c0318938b3085f50976512f321d84.)\n> (It didn't work on mapped relations, so it wasn't all roses.)\n\n> A comment in ATPrepSetStatistics() still makes references to this being \n> possible. I propose to remove this comment.\n> There was some discussion about (re-)allowing this and some other \n> commands like this, but after the recent changes to make \n> allow_system_table_mods easier to use, I think this has less urgency, so \n> I'd rather get the comment correct in the meantime.\n\nSeems reasonable. The argument for making this an exception to\nallow_system_table_mods was always more about expediency than logical\ncleanliness. After the recent changes I think it's okay to remove the\nspecial case (especially since nobody has griped about it being broken).\n\nHowever ... if we're not going to have that special case, couldn't\nwe get rid of the whole business of having a special permissions test?\nWhat is it that ATSimplePermissions can't handle here? The business\nabout requiring a colName certainly doesn't need to be done before the\nownership check (in fact, it could be left to execution, so we don't need\na prep function at all anymore).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 Dec 2019 18:16:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: about allow_system_table_mods and SET STATISTICS"
},
{
"msg_contents": "On 2019-12-05 00:16, Tom Lane wrote:\n> Seems reasonable. The argument for making this an exception to\n> allow_system_table_mods was always more about expediency than logical\n> cleanliness. After the recent changes I think it's okay to remove the\n> special case (especially since nobody has griped about it being broken).\n> \n> However ... if we're not going to have that special case, couldn't\n> we get rid of the whole business of having a special permissions test?\n> What is it that ATSimplePermissions can't handle here? The business\n> about requiring a colName certainly doesn't need to be done before the\n> ownership check (in fact, it could be left to execution, so we don't need\n> a prep function at all anymore).\n\nGood point. Done in the attached patch.\n\n(If someone wanted to revive the original functionality, it would \nnowadays probably be easier to add a flag ATT_SYSTEM_TABLE to \nATSimplePermissions(), so there is really no reason to keep the old \nfunction separate.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 10 Dec 2019 13:41:28 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: about allow_system_table_mods and SET STATISTICS"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Good point. Done in the attached patch.\n> (If someone wanted to revive the original functionality, it would \n> nowadays probably be easier to add a flag ATT_SYSTEM_TABLE to \n> ATSimplePermissions(), so there is really no reason to keep the old \n> function separate.)\n\nYeah --- that way, the behavior would also be conveniently available\nto other ALTER TABLE subcommands.\n\nThis patch looks good, with one trivial nitpick: it looks a bit odd\nto insert the relkind check into ATExecSetStatistics between the\nassignment of \"newtarget\" and the validity check for same. I'd\nput it either before or after that whole stanza. Just a cosmetic\nthing though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Dec 2019 11:23:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: about allow_system_table_mods and SET STATISTICS"
},
{
"msg_contents": "On 2019-12-10 17:23, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> Good point. Done in the attached patch.\n>> (If someone wanted to revive the original functionality, it would\n>> nowadays probably be easier to add a flag ATT_SYSTEM_TABLE to\n>> ATSimplePermissions(), so there is really no reason to keep the old\n>> function separate.)\n> \n> Yeah --- that way, the behavior would also be conveniently available\n> to other ALTER TABLE subcommands.\n> \n> This patch looks good, with one trivial nitpick: it looks a bit odd\n> to insert the relkind check into ATExecSetStatistics between the\n> assignment of \"newtarget\" and the validity check for same. I'd\n> put it either before or after that whole stanza. Just a cosmetic\n> thing though.\n\nCommitted that way. Thanks.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 11 Dec 2019 09:17:27 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: about allow_system_table_mods and SET STATISTICS"
}
] |
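The refactoring discussed above (folding the prep-time check into ATSimplePermissions(), with a possible ATT_SYSTEM_TABLE flag to re-allow catalog targets) boils down to a bitmask test on the relation kind. A standalone sketch with illustrative flag values and relkind letters; the real ATT_* constants live in the backend's tablecmds.c, and ATT_SYSTEM_TABLE is the hypothetical addition mentioned in the thread:

```c
#include <stdbool.h>

/* Illustrative flag values, mirroring the ATT_* style in tablecmds.c. */
#define ATT_TABLE			0x0001
#define ATT_MATVIEW			0x0004
#define ATT_INDEX			0x0008
#define ATT_FOREIGN_TABLE	0x0020
#define ATT_SYSTEM_TABLE	0x0100	/* hypothetical, per the thread */

/*
 * Roughly what ATSimplePermissions() does: map the relation kind to a
 * target bit and test it against the subcommand's allowed mask.  A system
 * catalog is rejected unless the subcommand opts in via ATT_SYSTEM_TABLE
 * (otherwise allow_system_table_mods would be required).
 */
static bool
relkind_allowed(char relkind, bool is_system, int allowed)
{
	int			actual;

	switch (relkind)
	{
		case 'r': actual = ATT_TABLE; break;
		case 'm': actual = ATT_MATVIEW; break;
		case 'i': actual = ATT_INDEX; break;
		case 'f': actual = ATT_FOREIGN_TABLE; break;
		default: return false;
	}
	if (is_system && (allowed & ATT_SYSTEM_TABLE) == 0)
		return false;
	return (actual & allowed) != 0;
}
```

With this shape, reviving the old SET STATISTICS exception would be a one-line change at the call site (adding ATT_SYSTEM_TABLE to its mask) rather than a separate prep function.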
[
{
"msg_contents": "Hi all,\n\nSo, I have been looking at what we could clean up by removing support\nfor OpenSSL 0.9.8 and 1.0.0. Here are my notes:\n1) SSL_get_current_compression exists before 0.9.8, and we don't\nactually make use of its configure check. So I think that it could\njust be removed, as per patch 0001.\n2) SSL_clear_options exists since 0.9.8, so we should not even need the\nconfigure checks. Still, it is defined as a macro from 0.9.8 to\n1.0.2, and then it has switched to a function in 1.1.0, so we fail to\ndetect it on past versions of OpenSSL (LibreSSL has forked at the\npoint of 1.0.1g, so it uses only a macro). There is an extra take\nthough. Daniel has mentioned that here:\nhttps://www.postgresql.org/message-id/98F7F99E-1129-41D8-B86B-FE3B1E286881@yesql.se\nNote also that a364dfa has also added a tweak in fe-secure-openssl.c\nfor cases where we don't have SSL_clear_options(). This refers to\nNetBSD 5.1. Peter, do you recall which version of LibreSSL was\ninvolved here? From a lookup at the code of LibreSSL, the function\nhas always been around as a macro. Anyway, 0002 is more subject to\ndiscussions regarding this last point.\n\nThen comes the actual changes across the major versions:\n1) SSL_CTX_set_options, which has been added in 0.9.8f, so this could\nget removed in be-secure-openssl.c.\n2) These functions are new as of 1.0.2:\nX509_get_signature_nid\n3) These functions are new as of 1.1.0:\n- SSL_CTX_set_min_proto_version, SSL_CTX_set_max_proto_version (still\nfor the fallback functions we have it sounds better to keep the extra\nchecks on the TLSvXX definitions.)\n- BIO_meth_new\n- BIO_get_data\n- OPENSSL_init_ssl\n- ASN1_STRING_get0_data\nFrom the point of view of the code, the cleanup is not actually that\namazing I am afraid, a jump directly to 1.1.0 would remove much more\nbecause the breakages were wider when we integrated it. Anyway, those\ncleanups are part of 0003. 
I thought that this would have resulted in\nmore cleanup :(\n\nI think that 0001 is a fix we need to do; 0002 is debatable, though\nLibreSSL should support it and we fail to detect SSL_clear_options\nproperly; and 0003 does not really add much additional value. Or we put\ninto the balance for 0003 the argument that we can use TLSv1.2 for all\nconfigurations, which is safer, but we already have the configuration to\nenforce it.\n\nThoughts?\n--\nMichael",
"msg_date": "Thu, 5 Dec 2019 17:32:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Removal of support for OpenSSL 0.9.8 and 1.0.0"
},
{
"msg_contents": "> On 5 Dec 2019, at 09:32, Michael Paquier <michael@paquier.xyz> wrote:\n\n> From the point of view of the code, the cleanup is not actually that\n> amazing I am afraid, a jump directly to 1.1.0 would remove much more\n> because the breakages were wider when we integrated it. Anyway, those\n> cleanups are part of 0003. I thought that this would have resulted in\n> more cleanup :(\n\nWhile expected, it's still disappointing. Until we can drop 1.0.2 there isn't\ntoo much to gain, and that will likely be reasonably far into the future given\nthat it's the final version that can run the FIPS module.\n\n> I think that 0001 is a fix we need to do\n\n+1\n\n> 0002 is debatable still LibreSSL should support it and we fail to detect\n> SSL_clear_options properly,\n\n\nLibreSSL has SSL_clear_options in all versions, as does OpenSSL even at the\nlevel of support we have as of now. The only issue with 0002 is support for\nolder NetBSD releases, as is documented in the comment and commit message.\n\nNetBSD 5.x shipped a custom OpenSSL identified as 0.9.9, which is a version of\ntheir own invention. NetBSD 6.0 (which shipped in October 2012) ships 1.0.1u,\nwhich has SSL_clear_options as well as SSL_OP_NO_COMPRESSION. So, this patch\nis not really debateable if we are dropping support for 0.9.8 and 1.0.0.\n \nopossum is the only animal in the buildfarm on NetBSD 5, and it has been silent\nfor close to a year now (coypu is on NetBSD 8). Requiring opossum to build\nwithout OpenSSL (if/when) it comes back seems a reasonable ask. NetBSD 5.x has\nbeen EOL for quite some time too.\n\n+1 on applying this instead of trying to fix the autoconf check.\n\n> 0003 does not really much additional value. 
Or we put\n> into the balance for 0003 the argument that we can use TLSv1.2 for all\n> configurations, which is safer but we have the configuration to\n> enforce it.\n\nI think the TLSv1.2 argument is the most compelling one, the changes are a wash\nin terms of code maintainability. Raising the minimum supported version\ndoesn't really have any downsides though, and should be quite uncontroversial\n(and as noted in the other thread, prariedog and gaur are ready for the change\nso buildfarm breakage should be limited to an animal that doesnt build\nanymore). Overall, +1.\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 5 Dec 2019 12:09:04 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Removal of support for OpenSSL 0.9.8 and 1.0.0"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 5 Dec 2019, at 09:32, Michael Paquier <michael@paquier.xyz> wrote:\n>> From the point of view of the code, the cleanup is not actually that\n>> amazing I am afraid, a jump directly to 1.1.0 would remove much more\n>> because the breakages were wider when we integrated it. Anyway, those\n>> cleanups are part of 0003. I thought that this would have resulted in\n>> more cleanup :(\n\n> While expected, it's still disappointing. Until we can drop 1.0.2 there isn't\n> too much to gain, and that will likely be reasonably far into the future given\n> that it's the final version that can run the FIPS module.\n\nYeah; also as mentioned in the other thread, 1.0.1 is still in use\nin RHEL 6, so it's hard to consider dropping that for at least another\nyear. I concur with the conclusion that we can stop worrying about\nNetBSD 5, though.\n\nI see nothing to object to in this patch set.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 05 Dec 2019 10:38:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Removal of support for OpenSSL 0.9.8 and 1.0.0"
},
{
"msg_contents": "On Thu, Dec 05, 2019 at 10:38:55AM -0500, Tom Lane wrote:\n> Yeah; also as mentioned in the other thread, 1.0.1 is still in use\n> in RHEL 6, so it's hard to consider dropping that for at least another\n> year. I concur with the conclusion that we can stop worrying about\n> NetBSD 5, though.\n\nThanks. Another argument in favor of dropping 1.0.0 and 0.9.8 is that\nit is a pain to check an OpenSSL patch across that many versions,\nmultiplied by the number of Postgres branches in need of patching :)\n\n> I see nothing to object to in this patch set.\n\nI have applied 0001 on HEAD for now as that's a simple cleanup (I\nwould not backpatch that though). For 0002, I would prefer be sure\nthat we reach the right conclusion. My take is to:\n1) Apply 0002 only on HEAD to remove the check for clear_options.\n2) Apply something like Daniel's patch in [1] for REL_12_STABLE and\nREL_11_STABLE as we care about this routine since e3bdb2d to not mess\nup with past versions of NetBSD which worked previously on our\nreleased branches. (The patch looks fine at quick glance and I\nhaven't tested it yet)\n\n[1]: https://www.postgresql.org/message-id/3C636E88-44C7-40C6-ABA3-1B236E0A74DE@yesql.se\n--\nMichael",
"msg_date": "Fri, 6 Dec 2019 10:33:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Removal of support for OpenSSL 0.9.8 and 1.0.0"
},
{
"msg_contents": "On Fri, Dec 06, 2019 at 10:33:23AM +0900, Michael Paquier wrote:\n> Thanks. Another argument in favor of dropping 1.0.0 and 0.9.8 is that\n> it is a pain to check an OpenSSL patch across that many versions,\n> multiplied by the number of Postgres branches in need of patching :)\n\nI have done nothing for 0003 yet. Let's wait a bit and see if others\nhave more arguments in favor of it or not.\n\n> I have applied 0001 on HEAD for now as that's a simple cleanup (I\n> would not backpatch that though). For 0002, I would prefer be sure\n> that we reach the right conclusion. My take is to:\n> 1) Apply 0002 only on HEAD to remove the check for clear_options.\n> 2) Apply something like Daniel's patch in [1] for REL_12_STABLE and\n> REL_11_STABLE as we care about this routine since e3bdb2d to not mess\n> up with past versions of NetBSD which worked previously on our\n> released branches. (The patch looks fine at quick glance and I\n> haven't tested it yet)\n\n0002 is now applied, and did things as described in the above\nparagraph.\n--\nMichael",
"msg_date": "Fri, 6 Dec 2019 15:22:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Removal of support for OpenSSL 0.9.8 and 1.0.0"
},
{
"msg_contents": "> On 6 Dec 2019, at 02:33, Michael Paquier <michael@paquier.xyz> wrote:\n\n> Another argument in favor of dropping 1.0.0 and 0.9.8 is that\n> it is a pain to check an OpenSSL patch across that many versions,\n> multiplied by the number of Postgres branches in need of patching :)\n\nThat is indeed a very good argument.\n\ncheers ./daniel\n\n\n",
"msg_date": "Fri, 6 Dec 2019 09:21:55 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Removal of support for OpenSSL 0.9.8 and 1.0.0"
},
{
"msg_contents": "On Fri, Dec 06, 2019 at 09:21:55AM +0100, Daniel Gustafsson wrote:\n> On 6 Dec 2019, at 02:33, Michael Paquier <michael@paquier.xyz> wrote:\n>> Another argument in favor of dropping 1.0.0 and 0.9.8 is that\n>> it is a pain to check an OpenSSL patch across that many versions,\n>> multiplied by the number of Postgres branches in need of patching :)\n> \n> That is indeed a very good argument.\n\nSorry for letting this thread down for a couple of weeks, but I was\nhesitating to apply the last patch of the series as the cleanup of the\ncode related to OpenSSL 0.9.8 and 1.0.0 is not that much. An extra\nargument in favor of the removal is that this can allow more shaving\nof past Python versions, as proposed by Peter here:\nhttps://www.postgresql.org/message-id/98b69261-298c-13d2-f34d-836fd9c29b21@2ndquadrant.com\n\nSo, let's do it. I don't think that I'll be able to do anything this\nweek about it, but that should be fine by the end of next week. Are\nthere any objections or comments?\n\nFor now, please note that I have added an entry in the CF app:\nhttps://commitfest.postgresql.org/26/2413/\n--\nMichael",
"msg_date": "Thu, 2 Jan 2020 22:56:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Removal of support for OpenSSL 0.9.8 and 1.0.0"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Sorry for letting this thread down for a couple of weeks, but I was\n> hesitating to apply the last patch of the series as the cleanup of the\n> code related to OpenSSL 0.9.8 and 1.0.0 is not that much. An extra\n> argument in favor of the removal is that this can allow more shaving\n> of past Python versions, as proposed by Peter here:\n> https://www.postgresql.org/message-id/98b69261-298c-13d2-f34d-836fd9c29b21@2ndquadrant.com\n\n> So, let's do it.\n\nFWIW, I'm not sure I see why there's a connection between moving up\nthe minimum Python version and minimum OpenSSL version. Nobody is\ninstalling bleeding-edge Postgres on RHEL5, not even me ;-), so I\ndon't especially buy Peter's line of reasoning.\n\nI'm perfectly okay with doing both things in HEAD, I just don't\nsee that doing one is an argument for or against doing the other.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Jan 2020 09:22:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Removal of support for OpenSSL 0.9.8 and 1.0.0"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> For now, please note that I have added an entry in the CF app:\n> https://commitfest.postgresql.org/26/2413/\n\nBTW, the referenced patch only removes the configure check for\nSSL_get_current_compression, which is fine, but is that even\nmoving any goalposts? The proposed commit message says the\nfunction exists down to 0.9.8, which is already our minimum.\nThere's nothing to debate here if that statement is accurate.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Jan 2020 09:30:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Removal of support for OpenSSL 0.9.8 and 1.0.0"
},
{
"msg_contents": "On Thu, Jan 02, 2020 at 09:30:42AM -0500, Tom Lane wrote:\n> BTW, the referenced patch only removes the configure check for\n> SSL_get_current_compression, which is fine, but is that even\n> moving any goalposts? The proposed commit message says the\n> function exists down to 0.9.8, which is already our minimum.\n> There's nothing to debate here if that statement is accurate.\n\nOops, sorry for the confusion. There are three patches in the set.\n0001 has been already applied as of 28f4bba, and 0002 as of 7d0bcb0\n(backpatched with a different fix from Daniel to allow the build to\nstill work). The actual patch I am proposing to finish merging is\n0003 as posted here, which is the remaining piece:\nhttps://www.postgresql.org/message-id/20191205083252.GE5064@paquier.xyz\n--\nMichael",
"msg_date": "Fri, 3 Jan 2020 11:33:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Removal of support for OpenSSL 0.9.8 and 1.0.0"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Thu, Jan 02, 2020 at 09:30:42AM -0500, Tom Lane wrote:\n>> BTW, the referenced patch only removes the configure check for\n>> SSL_get_current_compression\n\n> The actual patch I am proposing to finish merging is\n> 0003 as posted here, which is the remaining piece:\n> https://www.postgresql.org/message-id/20191205083252.GE5064@paquier.xyz\n\nAh. The CF app doesn't understand that (and hence the cfbot ditto),\nso you might want to repost just the currently-proposed patch to get\nthe cfbot to try it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Jan 2020 23:45:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Removal of support for OpenSSL 0.9.8 and 1.0.0"
},
{
"msg_contents": "On Thu, Jan 02, 2020 at 11:45:37PM -0500, Tom Lane wrote:\n> Ah. The CF app doesn't understand that (and hence the cfbot ditto),\n> so you might want to repost just the currently-proposed patch to get\n> the cfbot to try it.\n\nYes, let's do that. Here you go with a v2. While on it, I have\nnoticed in the docs a mention to OpenSSL 1.0.0 regarding our\nsslcompression parameter in libpq, so a paragraph can be removed.\n--\nMichael",
"msg_date": "Fri, 3 Jan 2020 15:49:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Removal of support for OpenSSL 0.9.8 and 1.0.0"
},
{
"msg_contents": "> On 3 Jan 2020, at 07:49, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Thu, Jan 02, 2020 at 11:45:37PM -0500, Tom Lane wrote:\n>> Ah. The CF app doesn't understand that (and hence the cfbot ditto),\n>> so you might want to repost just the currently-proposed patch to get\n>> the cfbot to try it.\n> \n> Yes, let's do that. Here you go with a v2. While on it, I have\n> noticed in the docs a mention to OpenSSL 1.0.0 regarding our\n> sslcompression parameter in libpq, so a paragraph can be removed.\n\nLGTM, switching to ready for committer.\n\ncheers ./daniel\n\n\n",
"msg_date": "Fri, 3 Jan 2020 22:57:54 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Removal of support for OpenSSL 0.9.8 and 1.0.0"
},
{
"msg_contents": "On Thu, Jan 02, 2020 at 09:22:47AM -0500, Tom Lane wrote:\n> FWIW, I'm not sure I see why there's a connection between moving up\n> the minimum Python version and minimum OpenSSL version. Nobody is\n> installing bleeding-edge Postgres on RHEL5, not even me ;-), so I\n> don't especially buy Peter's line of reasoning.\n\nIt seems to me that the line of reasoning was to consider RHEL5 in the\ngarbage for all our dependencies, in a consistent way.\n\n> I'm perfectly okay with doing both things in HEAD, I just don't\n> see that doing one is an argument for or against doing the other.\n\nYes, right. That would be the case if we had direct dependencies\nbetween both, but that has never been the case AFAIK.\n--\nMichael",
"msg_date": "Mon, 6 Jan 2020 11:17:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Removal of support for OpenSSL 0.9.8 and 1.0.0"
},
{
"msg_contents": "On Fri, Jan 03, 2020 at 10:57:54PM +0100, Daniel Gustafsson wrote:\n> LGTM, switching to ready for committer.\n\nThanks Daniel. I have looked at that stuff again, and committed the\npatch.\n--\nMichael",
"msg_date": "Mon, 6 Jan 2020 12:54:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Removal of support for OpenSSL 0.9.8 and 1.0.0"
}
] |
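The thread above repeatedly compares OpenSSL release numbers (0.9.8, 1.0.0, 1.0.1, 1.0.2). As an illustrative aside (not part of the archived thread), pre-3.0 OpenSSL encodes its version as a single hex integer laid out as `0xMNNFFPPS` (major, minor, fix, patch letter, status), which is why a minimum-version cutoff like the one discussed reduces to one integer comparison:

```python
def openssl_version_num(major, minor, fix, patch=0, status=0xF):
    """Pack a pre-3.0 OpenSSL version as 0xMNNFFPPS: major nibble,
    minor byte, fix byte, patch letter, status nibble (0xF = release)."""
    return (major << 28) | (minor << 20) | (fix << 12) | (patch << 4) | status

MIN_SUPPORTED = openssl_version_num(1, 0, 1)  # the cutoff adopted in the thread

assert openssl_version_num(0, 9, 8) < MIN_SUPPORTED   # dropped
assert openssl_version_num(1, 0, 0) < MIN_SUPPORTED   # dropped
assert openssl_version_num(1, 0, 2) >= MIN_SUPPORTED  # still supported
print(hex(MIN_SUPPORTED))  # → 0x1000100f
```

This is the same comparison a C-level `#if OPENSSL_VERSION_NUMBER < 0x10001000L` gate expresses when raising the configure-time minimum.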
[
{
"msg_contents": "Hello,\n\nWhile reading pg_upgrade code to restore the objects on the new\ncluster, I noticed that 5b570d771b8 didn't adjust the database name in\nthe comments explaining the requirements for an extra \"--clean\" for\ntemplate1 and postgres databases. While it's true that both databases\nwill already exist, I found it confusing to mention both names when\nonly one is processed for each code path.",
"msg_date": "Thu, 5 Dec 2019 10:17:54 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Misleading comment in pg_upgrade.c"
},
{
"msg_contents": "> On 5 Dec 2019, at 10:17, Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> While reading pg_upgrade code to restore the objects on the new\n> cluster, I noticed that 5b570d771b8 didn't adjust the database name in\n> the comments explaining the requirements for an extra \"--clean\" for\n> template1 and postgres databases. While it's true that both databases\n> will already exist, I found it confusing to mention both names when\n> only one is processed for each code path.\n\nAgreed, I think this reads better.\n\ncheers ./daniel\n\n\n\n",
"msg_date": "Thu, 5 Dec 2019 23:45:09 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Misleading comment in pg_upgrade.c"
},
{
"msg_contents": "On Thu, Dec 5, 2019 at 11:45:09PM +0100, Daniel Gustafsson wrote:\n> > On 5 Dec 2019, at 10:17, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> > While reading pg_upgrade code to restore the objects on the new\n> > cluster, I noticed that 5b570d771b8 didn't adjust the database name in\n> > the comments explaining the requirements for an extra \"--clean\" for\n> > template1 and postgres databases. While it's true that both databases\n> > will already exist, I found it confusing to mention both names when\n> > only one is processed for each code path.\n> \n> Agreed, I think this reads better.\n\nFYI, this patch was applied:\n\n\tcommit 690c880269\n\tAuthor: Michael Paquier <michael@paquier.xyz>\n\tDate: Fri Dec 6 11:55:04 2019 +0900\n\t\n\t Improve some comments in pg_upgrade.c\n\t\n\t When restoring database schemas on a new cluster, database \"template1\"\n\t is processed first, followed by all other databases in parallel,\n\t including \"postgres\". Both \"postgres\" and \"template1\" have some extra\n\t handling to propagate each one's properties, but comments were confusing\n\t regarding which one is processed where.\n\t\n\t Author: Julien Rouhaud\n\t Reviewed-by: Daniel Gustafsson\n\t Discussion: https://postgr.es/m/CAOBaU_a2iviTG7FE10yO_gcW+zQCHNFhRA_NDiktf3UR65BHdw@mail.gmail.com\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sat, 21 Dec 2019 12:46:45 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Misleading comment in pg_upgrade.c"
},
{
"msg_contents": "Le sam. 21 déc. 2019 à 18:46, Bruce Momjian <bruce@momjian.us> a écrit :\n\n> On Thu, Dec 5, 2019 at 11:45:09PM +0100, Daniel Gustafsson wrote:\n> > > On 5 Dec 2019, at 10:17, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > > While reading pg_upgrade code to restore the objects on the new\n> > > cluster, I noticed that 5b570d771b8 didn't adjust the database name in\n> > > the comments explaining the requirements for an extra \"--clean\" for\n> > > template1 and postgres databases. While it's true that both databases\n> > > will already exist, I found it confusing to mention both names when\n> > > only one is processed for each code path.\n> >\n> > Agreed, I think this reads better.\n>\n> FYI, this patch was applied:\n>\n> commit 690c880269\n> Author: Michael Paquier <michael@paquier.xyz>\n> Date: Fri Dec 6 11:55:04 2019 +0900\n>\n> Improve some comments in pg_upgrade.c\n>\n> When restoring database schemas on a new cluster, database\n> \"template1\"\n> is processed first, followed by all other databases in\n> parallel,\n> including \"postgres\". Both \"postgres\" and \"template1\" have\n> some extra\n> handling to propagate each one's properties, but comments were\n> confusing\n> regarding which one is processed where.\n>\n> Author: Julien Rouhaud\n> Reviewed-by: Daniel Gustafsson\n> Discussion:\n> https://postgr.es/m/CAOBaU_a2iviTG7FE10yO_gcW+zQCHNFhRA_NDiktf3UR65BHdw@mail.gmail.com\n\n\nThanks Bruce, and thanks Michael for pushing!\n\nLe sam. 21 déc. 2019 à 18:46, Bruce Momjian <bruce@momjian.us> a écrit :On Thu, Dec 5, 2019 at 11:45:09PM +0100, Daniel Gustafsson wrote:\n> > On 5 Dec 2019, at 10:17, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> > While reading pg_upgrade code to restore the objects on the new\n> > cluster, I noticed that 5b570d771b8 didn't adjust the database name in\n> > the comments explaining the requirements for an extra \"--clean\" for\n> > template1 and postgres databases. 
While it's true that both databases\n> > will already exist, I found it confusing to mention both names when\n> > only one is processed for each code path.\n> \n> Agreed, I think this reads better.\n\nFYI, this patch was applied:\n\n commit 690c880269\n Author: Michael Paquier <michael@paquier.xyz>\n Date: Fri Dec 6 11:55:04 2019 +0900\n\n Improve some comments in pg_upgrade.c\n\n When restoring database schemas on a new cluster, database \"template1\"\n is processed first, followed by all other databases in parallel,\n including \"postgres\". Both \"postgres\" and \"template1\" have some extra\n handling to propagate each one's properties, but comments were confusing\n regarding which one is processed where.\n\n Author: Julien Rouhaud\n Reviewed-by: Daniel Gustafsson\n Discussion: https://postgr.es/m/CAOBaU_a2iviTG7FE10yO_gcW+zQCHNFhRA_NDiktf3UR65BHdw@mail.gmail.comThanks Bruce, and thanks Michael for pushing!",
"msg_date": "Sat, 21 Dec 2019 21:42:10 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Misleading comment in pg_upgrade.c"
}
] |
[
{
"msg_contents": "When hacking the Zedstore, we need to get a more accurate statistic for\nzedstore and we\nfaced some restrictions:\n1) acquire_sample_rows() always use RelationGetNumberOfBlocks to generate\nsampling block\n numbers, this is not friendly for zedstore which wants to use a logical\nblock number and might also\n not friendly to non-block-oriented Table AMs.\n2) columns of zedstore table store separately, so columns in a row have a\ndifferent physical position,\n tid in a tuple is invalid for zedstore which means the correlation\nstatistic is incorrect for zedstore.\n3) RelOptInfo->pages is not correct for Zedstore if we only access partial\nof the columns which make\n the IO cost much higher than the actual cost.\n\nFor 1) and 2), we propose to extend existing ANALYZE-scan table AM routines\nin patch\n\"0001-ANALYZE-tableam-API-change.patch\" which add three more APIs:\nscan_analyze_beginscan(), scan_analyze_sample_tuple(),\nscan_analyze_endscan(). This provides\nmore convenience and table AMs can take more control of every step of\nsampling rows. Meanwhile,\nwith the new structure named \"AcquireSampleContext\", we can acquire extra\ninfo (eg: physical position,\nphysical size) except the real columns values.\n\nFor 3), we hope we can have a similar mechanism with RelOptInfo->rows which\nis calculated from\n (RelOptInfo->tuples * Selectivity), we can calculate RelOptInfo->pages\nwith a page selectivity which\nis base on the selected zedstore columns.\n0002-Planner-can-estimate-the-pages-based-on-the-columns-.patch\nshows one idea that adding the `stadiskfrac` to pg_statistic and planner\nuse it to estimate the\nRelOptInfo->pages.\n\n0003-ZedStore-use-extended-ANAlYZE-API.patch is attached to only show how\nZedstore use the\nprevious patches to achieve:\n1. use logical block id to acquire the sample rows.\n2. 
can only acquire sample rows from specified column c1, this is used when\nuser only analyze table\n on specified columns eg: \"analyze zs (c1)\".\n3 when ANALYZE, zedstore table AM provided extra disksize info, then\nANALYZE compute the\n physical fraction statistic of each column and planner use it to\nestimate the IO cost based on\n the selected columns.\n\nThanks,\nPengzhou",
"msg_date": "Thu, 5 Dec 2019 18:14:17 +0800",
"msg_from": "Pengzhou Tang <ptang@pivotal.io>",
"msg_from_op": true,
"msg_subject": "[Proposal] Extend TableAM routines for ANALYZE scan"
},
{
"msg_contents": "Hello,\n\nOn Thu, Dec 5, 2019 at 11:14 AM Pengzhou Tang <ptang@pivotal.io> wrote:\n>\n> When hacking the Zedstore, we need to get a more accurate statistic for zedstore and we\n> faced some restrictions:\n> 1) acquire_sample_rows() always use RelationGetNumberOfBlocks to generate sampling block\n> numbers, this is not friendly for zedstore which wants to use a logical block number and might also\n> not friendly to non-block-oriented Table AMs.\n> 2) columns of zedstore table store separately, so columns in a row have a different physical position,\n> tid in a tuple is invalid for zedstore which means the correlation statistic is incorrect for zedstore.\n> 3) RelOptInfo->pages is not correct for Zedstore if we only access partial of the columns which make\n> the IO cost much higher than the actual cost.\n>\n> For 1) and 2), we propose to extend existing ANALYZE-scan table AM routines in patch\n> \"0001-ANALYZE-tableam-API-change.patch\" which add three more APIs:\n> scan_analyze_beginscan(), scan_analyze_sample_tuple(), scan_analyze_endscan(). This provides\n> more convenience and table AMs can take more control of every step of sampling rows. Meanwhile,\n> with the new structure named \"AcquireSampleContext\", we can acquire extra info (eg: physical position,\n> physical size) except the real columns values.\n>\n> For 3), we hope we can have a similar mechanism with RelOptInfo->rows which is calculated from\n> (RelOptInfo->tuples * Selectivity), we can calculate RelOptInfo->pages with a page selectivity which\n> is base on the selected zedstore columns. 0002-Planner-can-estimate-the-pages-based-on-the-columns-.patch\n> shows one idea that adding the `stadiskfrac` to pg_statistic and planner use it to estimate the\n> RelOptInfo->pages.\n>\n> 0003-ZedStore-use-extended-ANAlYZE-API.patch is attached to only show how Zedstore use the\n> previous patches to achieve:\n> 1. use logical block id to acquire the sample rows.\n> 2. 
can only acquire sample rows from specified column c1, this is used when user only analyze table\n> on specified columns eg: \"analyze zs (c1)\".\n> 3 when ANALYZE, zedstore table AM provided extra disksize info, then ANALYZE compute the\n> physical fraction statistic of each column and planner use it to estimate the IO cost based on\n> the selected columns.\n\nI couldn't find an entry for that patchset in the next commitfest.\nCould you register it so that it won't be forgotten?\n\n\n",
"msg_date": "Mon, 23 Dec 2019 13:51:28 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Extend TableAM routines for ANALYZE scan"
}
] |
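The proposal above centers on how `acquire_sample_rows()` picks sample blocks and tuples. As a hedged illustration only (this is not the PostgreSQL implementation, which uses a two-stage block/tuple scheme), the tuple-level stage behaves like classic reservoir sampling (Vitter's Algorithm R): it keeps a bounded, uniform sample while streaming rows whose total count is unknown up front:

```python
import random

def reservoir_sample(rows, k, seed=42):
    """Keep a uniform random sample of k items from a stream of
    unknown length, in the spirit of ANALYZE's tuple sampling."""
    rng = random.Random(seed)
    sample = []
    for i, row in enumerate(rows):
        if i < k:
            sample.append(row)            # fill the reservoir first
        elif (j := rng.randint(0, i)) < k:
            sample[j] = row               # replace with probability k/(i+1)
    return sample

sample = reservoir_sample(range(100_000), 300)
assert len(sample) == 300 and all(0 <= r < 100_000 for r in sample)
```

A table AM that hands out logical block numbers (as zedstore wants) only changes where the candidate rows come from; the bounded-sample step itself is unchanged.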
[
{
"msg_contents": "Hi,\nIt was observed that when we try to connect through a user/database created\nusing\nsimplified Chinese characters on EUC_CN server_encoding, it fails giving\nerror that\nthe object does not exists. Whereas if we query the system table we can\nfind their entries\nthere.\n\nData setup:\n.) set the locale to zh_CN.UTF8\n[neha@localhost bin]$ locale\nLANG=zh_CN.UTF-8\nLC_CTYPE=\"zh_CN.UTF-8\"\nLC_NUMERIC=zh_CN.UTF-8\nLC_TIME=zh_CN.UTF-8\nLC_COLLATE=\"zh_CN.UTF-8\"\nLC_MONETARY=zh_CN.UTF-8\nLC_MESSAGES=\"zh_CN.UTF-8\"\nLC_PAPER=zh_CN.UTF-8\nLC_NAME=\"zh_CN.UTF-8\"\nLC_ADDRESS=\"zh_CN.UTF-8\"\nLC_TELEPHONE=\"zh_CN.UTF-8\"\nLC_MEASUREMENT=zh_CN.UTF-8\nLC_IDENTIFICATION=\"zh_CN.UTF-8\"\nLC_ALL=\n\n.) Perform initdb\n[neha@localhost bin]$ ./initdb -D /tmp/data --locale=zh_CN\n\nObservation:\npostgres=# show server_encoding ;\n server_encoding\n-----------------\n EUC_CN\n(1 行记录)\n\npostgres=# show client_encoding ;\n client_encoding\n-----------------\n UTF8\n(1 行记录)\n\npostgres=# create user 伐角 with password '规';\nCREATE ROLE\npostgres=# create database 谨角 ;\nCREATE DATABASE\npostgres=# alter role 伐角 with login ;\nALTER ROLE\npostgres=# \\c - 伐角\nFATAL: role \"伐角\" does not exist\nPrevious connection kept\npostgres=# \\c 谨角\nFATAL: database \"谨角\" does not exist\nPrevious connection kept\npostgres=# select * from pg_user where usename='伐角';\n usename | usesysid | usecreatedb | usesuper | userepl | usebypassrls |\n passwd | valuntil | useconfig\n---------+----------+-------------+----------+---------+--------------+----------+----------+-----------\n 伐角 | 16384 | f | f | f | f |\n******** | |\n(1 row)\npostgres=# select * from pg_database where datname='谨角';\n datname | datdba | encoding | datcollate | datctype | datistemplate |\ndatallowconn | datconnlimit | datlastsysoid | datfrozenxid | datminmxid |\ndattablespace | 
datacl\n---------+--------+----------+------------+----------+---------------+--------------+--------------+---------------+--------------+------------+---------------+--------\n 谨角 | 10 | 2 | zh_CN | zh_CN | f | t\n | -1 | 13286 | 561 | 1 |\n 1663 |\n(1 row)\n\n\nJust wondering, is there anything wrong I am doing or it's a bug?\n\n\nThanks.\n--\nRegards,\nNeha Sharma",
"msg_date": "Thu, 5 Dec 2019 16:21:12 +0530",
"msg_from": "Neha Sharma <neha.sharma@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "[EUC_CN] Failed to connect through user created user/database using\n simplified Chinese characters"
},
{
"msg_contents": "Neha Sharma <neha.sharma@enterprisedb.com> writes:\n> It was observed that when we try to connect through a user/database\n> created using simplified Chinese characters on EUC_CN server_encoding,\n> it fails giving error that the object does not exists. Whereas if we\n> query the system table we can find their entries there.\n\nThis looks like an encoding conversion problem. The connection request\nhas to spell the user name bitwise identically to the way it is stored\nin pg_authid; no conversions will be applied at that point. You have\n\n> postgres=# show server_encoding ;\n> server_encoding\n> -----------------\n> EUC_CN\n> (1 行记录)\n\n> postgres=# show client_encoding ;\n> client_encoding\n> -----------------\n> UTF8\n> (1 行记录)\n\n> postgres=# create user 伐角 with password '规';\n> CREATE ROLE\n\nso what this did, likely, is transmit the characters to the server\nin UTF8, then the server converted them to EUC_CN, and stored them\nin the catalogs in EUC_CN. But here:\n\n> postgres=# \\c - 伐角\n> FATAL: role \"伐角\" does not exist\n> Previous connection kept\n\nthose characters are again being sent to the server in UTF8,\nand it has no context that would tell it to convert to EUC_CN.\n\nThis is all pretty messy, and I've not heard any fix proposals that\nwouldn't amount to shifting the pain to somebody else's use-case.\nThe short answer, if you want to use non-ASCII characters in user names,\npasswords, or database names, is to always run the server and the client\nwith the same encodings.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 05 Dec 2019 10:50:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [EUC_CN] Failed to connect through user created user/database\n using simplified Chinese characters"
}
] |
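Tom Lane's explanation above reduces to byte equality: the role name in the connection request is compared byte-for-byte against what the catalog stores in the server encoding, with no conversion at that point. A small illustrative sketch (not from the thread; `euc_cn` is Python's codec alias for GB2312) shows why the same characters fail to match:

```python
name = "伐角"  # the role name from the report

utf8_bytes = name.encode("utf-8")    # what a UTF8 client sends
euc_bytes = name.encode("euc_cn")    # what the EUC_CN server stored

# Same characters, two different byte sequences -> no catalog match.
assert utf8_bytes != euc_bytes
assert len(utf8_bytes) == 6  # 3 bytes per character in UTF-8
assert len(euc_bytes) == 4   # 2 bytes per character in EUC_CN
```

Running client and server with the same encoding makes the two byte strings identical, which is exactly the workaround the thread suggests.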
[
{
"msg_contents": "We have a Postgres 10 database that we recently upgraded to Postgres 12 using pg_upgrade. We recently discovered that there are rows in one of the tables that have duplicate primary keys:\n\nrecord_loader=# \\d loader.sync\n Table \"loader.sync\"\n Column | Type | Collation | Nullable | Default\n-------------------+--------------------------+-----------+----------+---------\n source | text | | not null |\n natural_key | text | | not null |\n payload | jsonb | | |\n dispatched | timestamp with time zone | | not null | now()\n initial_load_id | text | | |\n deleted_load_id | text | | |\n created_timestamp | timestamp with time zone | | | now()\n updated_timestamp | timestamp with time zone | | | now()\n deleted_timestamp | timestamp with time zone | | |\nIndexes:\n \"sync_pkey\" PRIMARY KEY, btree (source, natural_key)\nPublications:\n \"debezium\"\n\nThis table is modified via triggers that fire off when a COPY command inserts many rows into another table.\n\nHere are two example duplicate rows:\n\n# SELECT xmin, xmax, cmin, cmax, source, md5(natural_key) AS natural_key_hash, dispatched, created_timestamp, updated_timestamp, deleted_timestamp FROM loader.sync WHERE (source, natural_key) = ('ok_lease', '...') ORDER BY xmin::text::int, cmin::text::int;\n-[ RECORD 1 ]-----+---------------------------------\nxmin | 116649\nxmax | 0\ncmin | 5304404\ncmax | 5304404\nsource | ok_lease\nnatural_key_hash | de3e9a567b90025c3399c4c63c823fe9\ndispatched | 2019-11-24 05:09:36.099686+00\ncreated_timestamp | 2019-11-24 05:09:36.099686+00\nupdated_timestamp | 2019-11-24 05:09:36.099686+00\ndeleted_timestamp |\n-[ RECORD 2 ]-----+---------------------------------\nxmin | 116649\nxmax | 118583\ncmin | 5312208\ncmax | 5312208\nsource | ok_lease\nnatural_key_hash | de3e9a567b90025c3399c4c63c823fe9\ndispatched | 2019-11-10 05:09:24.214964+00\ncreated_timestamp | 2019-05-17 21:24:19.558219+00\nupdated_timestamp | 2019-11-24 05:09:36.099686+00\ndeleted_timestamp | 
2019-11-24 05:09:36.099686+00\n\nIt appears that the second row was in place originally, then got updated by a trigger (and even deleted later on, although it doesn't appear that the delete transaction got committed), and then the first row was inserted within the same transaction that updated the second row.\n\nAnother example:\n-[ RECORD 1 ]-----+---------------------------------\nxmin | 116649\nxmax | 0\ncmin | 5304403\ncmax | 5304403\nsource | ok_lease\nnatural_key_hash | 1c8031348701a32cb5fee26839d6b0b4\ndispatched | 2019-11-10 05:09:24.214964+00\ncreated_timestamp | 2019-05-31 06:00:33.765547+00\nupdated_timestamp | 2019-11-24 05:09:36.099686+00\ndeleted_timestamp | 2019-11-24 05:09:36.099686+00\n-[ RECORD 2 ]-----+---------------------------------\nxmin | 116649\nxmax | 0\ncmin | 5304404\ncmax | 5304404\nsource | ok_lease\nnatural_key_hash | 1c8031348701a32cb5fee26839d6b0b4\ndispatched | 2019-11-24 05:09:36.099686+00\ncreated_timestamp | 2019-11-24 05:09:36.099686+00\nupdated_timestamp | 2019-11-24 05:09:36.099686+00\ndeleted_timestamp |\n\nBoth examples have in common that the two duplicate rows were touched within the same transaction.\n\nThis database runs inside Docker, with the data directory bind-mounted to a reflink-enabled XFS filesystem. The VM is running Debian's 4.19.16-1~bpo9+1 kernel inside an AWS EC2 instance. We have Debezium stream data from this database via pgoutput.\n\nRecreating the primary key confirms that the constraint doesn't (or at least shouldn't) permit these duplicate rows:\n\nrecord_loader=# BEGIN;\nBEGIN\nrecord_loader=# ALTER TABLE loader.sync DROP CONSTRAINT sync_pkey;\nALTER TABLE\nrecord_loader=# ALTER TABLE loader.sync ADD CONSTRAINT sync_pkey PRIMARY KEY (source, natural_key);\nERROR: could not create unique index \"sync_pkey\"\nDETAIL: Key (source, natural_key)=(ok_lease, ...) is duplicated.\nCONTEXT: parallel worker\n\nAny ideas on what might cause this behavior?\n\nThanks,\n\nAlex\n\n",
"msg_date": "Thu, 5 Dec 2019 21:14:12 +0000",
"msg_from": "Alex Adriaanse <alex@oseberg.io>",
"msg_from_op": true,
"msg_subject": "Corruption with duplicate primary key"
},
{
"msg_contents": "On Thu, Dec 5, 2019 at 1:14 PM Alex Adriaanse <alex@oseberg.io> wrote:\n> We have a Postgres 10 database that we recently upgraded to Postgres 12 using pg_upgrade. We recently discovered that there are rows in one of the tables that have duplicate primary keys:\n\nWhat's the timeline here? In other words, does it look like these rows\nwere updated and/or deleted before, around the same time as, or after\nthe upgrade?\n\nThe symptoms you report seem fairly generic to me, at least at first\nblush. They could be caused by storage level inconsistencies that\ncould have many causes.\n\n> This database runs inside Docker, with the data directory bind-mounted to a reflink-enabled XFS filesystem. The VM is running Debian's 4.19.16-1~bpo9+1 kernel inside an AWS EC2 instance. We have Debezium stream data from this database via pgoutput.\n\nThat seems suspicious, since reflink support for XFS is rather immature.\n\nHow did you invoke pg_upgrade? Did you use the --link (hard link) option?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 5 Dec 2019 15:34:37 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Corruption with duplicate primary key"
},
{
"msg_contents": "On Thu, Dec 05, 2019 at 09:14:12PM +0000, Alex Adriaanse wrote:\n>We have a Postgres 10 database that we recently upgraded to Postgres 12 using pg_upgrade. We recently discovered that there are rows in one of the tables that have duplicate primary keys:\n>\n>record_loader=# \\d loader.sync\n> Table \"loader.sync\"\n> Column | Type | Collation | Nullable | Default\n>-------------------+--------------------------+-----------+----------+---------\n> source | text | | not null |\n> natural_key | text | | not null |\n> payload | jsonb | | |\n> dispatched | timestamp with time zone | | not null | now()\n> initial_load_id | text | | |\n> deleted_load_id | text | | |\n> created_timestamp | timestamp with time zone | | | now()\n> updated_timestamp | timestamp with time zone | | | now()\n> deleted_timestamp | timestamp with time zone | | |\n>Indexes:\n> \"sync_pkey\" PRIMARY KEY, btree (source, natural_key)\n>Publications:\n> \"debezium\"\n>\n>This table is modified via triggers that fire off when a COPY command inserts many rows into another table.\n>\n>Here are two example duplicate rows:\n>\n># SELECT xmin, xmax, cmin, cmax, source, md5(natural_key) AS natural_key_hash, dispatched, created_timestamp, updated_timestamp, deleted_timestamp FROM loader.sync WHERE (source, natural_key) = ('ok_lease', '...') ORDER BY xmin::text::int, cmin::text::int;\n>-[ RECORD 1 ]-----+---------------------------------\n>xmin | 116649\n>xmax | 0\n>cmin | 5304404\n>cmax | 5304404\n>source | ok_lease\n>natural_key_hash | de3e9a567b90025c3399c4c63c823fe9\n>dispatched | 2019-11-24 05:09:36.099686+00\n>created_timestamp | 2019-11-24 05:09:36.099686+00\n>updated_timestamp | 2019-11-24 05:09:36.099686+00\n>deleted_timestamp |\n>-[ RECORD 2 ]-----+---------------------------------\n>xmin | 116649\n>xmax | 118583\n>cmin | 5312208\n>cmax | 5312208\n>source | ok_lease\n>natural_key_hash | de3e9a567b90025c3399c4c63c823fe9\n>dispatched | 2019-11-10 05:09:24.214964+00\n>created_timestamp | 
2019-05-17 21:24:19.558219+00\n>updated_timestamp | 2019-11-24 05:09:36.099686+00\n>deleted_timestamp | 2019-11-24 05:09:36.099686+00\n>\n>It appears that the second row was in place originally, then got updated by a trigger (and even deleted later on, although it doesn't appear that the delete transaction got committed), and then the first row was inserted within the same transaction that updated the second row.\n>\n>Another example:\n>-[ RECORD 1 ]-----+---------------------------------\n>xmin | 116649\n>xmax | 0\n>cmin | 5304403\n>cmax | 5304403\n>source | ok_lease\n>natural_key_hash | 1c8031348701a32cb5fee26839d6b0b4\n>dispatched | 2019-11-10 05:09:24.214964+00\n>created_timestamp | 2019-05-31 06:00:33.765547+00\n>updated_timestamp | 2019-11-24 05:09:36.099686+00\n>deleted_timestamp | 2019-11-24 05:09:36.099686+00\n>-[ RECORD 2 ]-----+---------------------------------\n>xmin | 116649\n>xmax | 0\n>cmin | 5304404\n>cmax | 5304404\n>source | ok_lease\n>natural_key_hash | 1c8031348701a32cb5fee26839d6b0b4\n>dispatched | 2019-11-24 05:09:36.099686+00\n>created_timestamp | 2019-11-24 05:09:36.099686+00\n>updated_timestamp | 2019-11-24 05:09:36.099686+00\n>deleted_timestamp |\n>\n>Both examples have in common that the two duplicate rows were touched within the same transaction.\n>\n>This database runs inside Docker, with the data directory bind-mounted to a reflink-enabled XFS filesystem. The VM is running Debian's 4.19.16-1~bpo9+1 kernel inside an AWS EC2 instance. We have Debezium stream data from this database via pgoutput.\n>\n>Recreating the primary key confirms that the constraint doesn't (or at least shouldn't) permit these duplicate rows:\n>\n>record_loader=# BEGIN;\n>BEGIN\n>record_loader=# ALTER TABLE loader.sync DROP CONSTRAINT sync_pkey;\n>ALTER TABLE\n>record_loader=# ALTER TABLE loader.sync ADD CONSTRAINT sync_pkey PRIMARY KEY (source, natural_key);\n>ERROR: could not create unique index \"sync_pkey\"\n>DETAIL: Key (source, natural_key)=(ok_lease, ...) 
is duplicated.\n>CONTEXT: parallel worker\n>\n>Any ideas on what might cause this behavior?\n>\n\nNot sure. At first I thought maybe this might be due to collations\nchanging and breaking the index silently. What collation are you using?\n\nA couple questions:\n\n1) When you do the queries, do they use index scan or sequential scan?\nPerhaps it does sequential scan, and if you force index scan (e.g. by\nrewriting the query) it'll only find one of those rows.\n\n2) Can you check in backups if this data corruption was present in the\nPG10 cluster, before running pg_upgrade? \n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 6 Dec 2019 00:45:29 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Corruption with duplicate primary key"
},
{
"msg_contents": "Re: \" It appears that the second row was in place originally, then got updated by a trigger (and even deleted later on, although it doesn't appear that the delete transaction got committed), and then the first row was inserted within the same transaction that updated the second row.\"\r\n\r\nIf you have BEFORE triggers, and a BEFORE trigger signaled failure with RETURN NULL, then this is one known (and documented) issue that I think could cause the behavior you're reporting:\r\n\r\n https://www.postgresql-archive.org/BEFORE-triggers-that-return-NULL-can-circumvent-referential-integrity-tt6056390.html#none\r\n\r\nIt's hard to say if this is the cause or not, but if you have any BEFORE triggers that RETURN NULL, you might want to review the documentation very carefully.\r\n\r\nthanks,\r\n\r\n /Jim F\r\n\r\nOn 12/5/19, 6:45 PM, \"Tomas Vondra\" <tomas.vondra@2ndquadrant.com> wrote:\r\n\r\n On Thu, Dec 05, 2019 at 09:14:12PM +0000, Alex Adriaanse wrote:\r\n >We have a Postgres 10 database that we recently upgraded to Postgres 12 using pg_upgrade. 
We recently discovered that there are rows in one of the tables that have duplicate primary keys:\r\n >\r\n >record_loader=# \\d loader.sync\r\n > Table \"loader.sync\"\r\n > Column | Type | Collation | Nullable | Default\r\n >-------------------+--------------------------+-----------+----------+---------\r\n > source | text | | not null |\r\n > natural_key | text | | not null |\r\n > payload | jsonb | | |\r\n > dispatched | timestamp with time zone | | not null | now()\r\n > initial_load_id | text | | |\r\n > deleted_load_id | text | | |\r\n > created_timestamp | timestamp with time zone | | | now()\r\n > updated_timestamp | timestamp with time zone | | | now()\r\n > deleted_timestamp | timestamp with time zone | | |\r\n >Indexes:\r\n > \"sync_pkey\" PRIMARY KEY, btree (source, natural_key)\r\n >Publications:\r\n > \"debezium\"\r\n >\r\n >This table is modified via triggers that fire off when a COPY command inserts many rows into another table.\r\n >\r\n >Here are two example duplicate rows:\r\n >\r\n ># SELECT xmin, xmax, cmin, cmax, source, md5(natural_key) AS natural_key_hash, dispatched, created_timestamp, updated_timestamp, deleted_timestamp FROM loader.sync WHERE (source, natural_key) = ('ok_lease', '...') ORDER BY xmin::text::int, cmin::text::int;\r\n >-[ RECORD 1 ]-----+---------------------------------\r\n >xmin | 116649\r\n >xmax | 0\r\n >cmin | 5304404\r\n >cmax | 5304404\r\n >source | ok_lease\r\n >natural_key_hash | de3e9a567b90025c3399c4c63c823fe9\r\n >dispatched | 2019-11-24 05:09:36.099686+00\r\n >created_timestamp | 2019-11-24 05:09:36.099686+00\r\n >updated_timestamp | 2019-11-24 05:09:36.099686+00\r\n >deleted_timestamp |\r\n >-[ RECORD 2 ]-----+---------------------------------\r\n >xmin | 116649\r\n >xmax | 118583\r\n >cmin | 5312208\r\n >cmax | 5312208\r\n >source | ok_lease\r\n >natural_key_hash | de3e9a567b90025c3399c4c63c823fe9\r\n >dispatched | 2019-11-10 05:09:24.214964+00\r\n >created_timestamp | 2019-05-17 21:24:19.558219+00\r\n 
>updated_timestamp | 2019-11-24 05:09:36.099686+00\r\n >deleted_timestamp | 2019-11-24 05:09:36.099686+00\r\n >\r\n >It appears that the second row was in place originally, then got updated by a trigger (and even deleted later on, although it doesn't appear that the delete transaction got committed), and then the first row was inserted within the same transaction that updated the second row.\r\n >\r\n >Another example:\r\n >-[ RECORD 1 ]-----+---------------------------------\r\n >xmin | 116649\r\n >xmax | 0\r\n >cmin | 5304403\r\n >cmax | 5304403\r\n >source | ok_lease\r\n >natural_key_hash | 1c8031348701a32cb5fee26839d6b0b4\r\n >dispatched | 2019-11-10 05:09:24.214964+00\r\n >created_timestamp | 2019-05-31 06:00:33.765547+00\r\n >updated_timestamp | 2019-11-24 05:09:36.099686+00\r\n >deleted_timestamp | 2019-11-24 05:09:36.099686+00\r\n >-[ RECORD 2 ]-----+---------------------------------\r\n >xmin | 116649\r\n >xmax | 0\r\n >cmin | 5304404\r\n >cmax | 5304404\r\n >source | ok_lease\r\n >natural_key_hash | 1c8031348701a32cb5fee26839d6b0b4\r\n >dispatched | 2019-11-24 05:09:36.099686+00\r\n >created_timestamp | 2019-11-24 05:09:36.099686+00\r\n >updated_timestamp | 2019-11-24 05:09:36.099686+00\r\n >deleted_timestamp |\r\n >\r\n >Both examples have in common that the two duplicate rows were touched within the same transaction.\r\n >\r\n >This database runs inside Docker, with the data directory bind-mounted to a reflink-enabled XFS filesystem. The VM is running Debian's 4.19.16-1~bpo9+1 kernel inside an AWS EC2 instance. 
We have Debezium stream data from this database via pgoutput.\r\n >\r\n >Recreating the primary key confirms that the constraint doesn't (or at least shouldn't) permit these duplicate rows:\r\n >\r\n >record_loader=# BEGIN;\r\n >BEGIN\r\n >record_loader=# ALTER TABLE loader.sync DROP CONSTRAINT sync_pkey;\r\n >ALTER TABLE\r\n >record_loader=# ALTER TABLE loader.sync ADD CONSTRAINT sync_pkey PRIMARY KEY (source, natural_key);\r\n >ERROR: could not create unique index \"sync_pkey\"\r\n >DETAIL: Key (source, natural_key)=(ok_lease, ...) is duplicated.\r\n >CONTEXT: parallel worker\r\n >\r\n >Any ideas on what might cause this behavior?\r\n >\r\n \r\n Not sure. At first I thought maybe this might be due to collations\r\n changing and breaking the index silently. What collation are you using?\r\n \r\n A couple questions:\r\n \r\n 1) When you do the queries, do they use index scan or sequential scan?\r\n Perhaps it does sequential scan, and if you force index scan (e.g. by\r\n rewriting the query) it'll only find one of those rows.\r\n \r\n 2) Can you check in backups if this data corruption was present in the\r\n PG10 cluster, before running pg_upgrade? \r\n \r\n regards\r\n \r\n -- \r\n Tomas Vondra http://www.2ndQuadrant.com\r\n PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \r\n \r\n \r\n \r\n\r\n",
"msg_date": "Mon, 9 Dec 2019 17:05:39 +0000",
"msg_from": "\"Finnerty, Jim\" <jfinnert@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Corruption with duplicate primary key"
},
{
"msg_contents": "On Thu, December 5, 2019 at 5:34 PM Peter Geoghegan wrote:\n> > We have a Postgres 10 database that we recently upgraded to Postgres 12 using pg_upgrade. We recently discovered that there are rows in one of the tables that have duplicate primary keys:\n> \n> What's the timeline here? In other words, does it look like these rows\n> were updated and/or deleted before, around the same time as, or after\n> the upgrade?\n\nThe Postgres 12 upgrade was performed on 2019-11-22, so the affected rows were modified after this upgrade (although some of the rows were originally inserted before then, before they were modified/duplicated).\n\n> > This database runs inside Docker, with the data directory bind-mounted to a reflink-enabled XFS filesystem. The VM is running Debian's 4.19.16-1~bpo9+1 kernel inside an AWS EC2 instance. We have Debezium stream data from this database via pgoutput.\n> \n> That seems suspicious, since reflink support for XFS is rather immature.\n\nGood point. Looking at kernel commits since 4.19.16 it appears that there have been a few bug fixes in later kernel versions that address a few XFS corruption issues. Regardless of whether FS bugs are responsible of this corruption I'll plan on upgrading to a newer kernel.\n\n> How did you invoke pg_upgrade? Did you use the --link (hard link) option?\n\nYes, we first created a backup using \"cp -a --reflink=always\", ran initdb on the new directory, and then upgraded using \"pg_upgrade -b ... -B ... -d ... -D -k\".\n\nAlex\n\n",
"msg_date": "Wed, 11 Dec 2019 23:42:45 +0000",
"msg_from": "Alex Adriaanse <alex@oseberg.io>",
"msg_from_op": true,
"msg_subject": "Re: Corruption with duplicate primary key"
},
{
"msg_contents": "On Thu., December 5, 2019 at 5:45 PM, Tomas Vondra wrote:\n> At first I thought maybe this might be due to collations\n> changing and breaking the index silently. What collation are you using?\n\nWe're using en_US.utf8. We did not make any collation changes to my knowledge.\n\n> 1) When you do the queries, do they use index scan or sequential scan?\n> Perhaps it does sequential scan, and if you force index scan (e.g. by\n> rewriting the query) it'll only find one of those rows.\n\nBy default it used an index scan. When I re-ran the query today (and confirmed that the query used an index only scan) I did not see any duplicates. If I force a sequential scan using \"SET enable_index[only]scan = false\" the duplicates reappear.\n\nHowever, using a backup from a week ago I see duplicates in both the query that uses an index only scan as well as the query that uses the sequential scan. So somehow over the past week the index got changed to eliminate duplicates.\n\n> 2) Can you check in backups if this data corruption was present in the\n> PG10 cluster, before running pg_upgrade? \n\nSure. I just checked and did not see any corruption in the PG10 pre-upgrade backup. I also re-upgraded that PG10 backup to PG12, and right after the upgrade I did not see any corruption either. I checked using both index scans and sequential scans.\n\nAlex\n\n",
"msg_date": "Wed, 11 Dec 2019 23:46:40 +0000",
"msg_from": "Alex Adriaanse <alex@oseberg.io>",
"msg_from_op": true,
"msg_subject": "Re: Corruption with duplicate primary key"
},
{
"msg_contents": "On Mon, December 9, 2019 at 11:05 AM Finnerty, Jim wrote:\n> If you have BEFORE triggers, and a BEFORE trigger signaled failure with RETURN NULL, then this is one known (and documented) issue that I think could cause the behavior you're reporting:\n> \n> https://www.postgresql-archive.org/BEFORE-triggers-that-return-NULL-can-circumvent-referential-integrity-tt6056390.html#none\n> \n> It's hard to say if this is the cause or not, but if you have any BEFORE triggers that RETURN NULL, you might want to review the documentation very carefully.\n\nWe do have a BEFORE INSERT trigger, but it should never return NULL. This trigger INSERTs into a different table using an ON CONFLICT DO NOTHING clause and then does a RETURN NEW.\n\nAlex\n\n",
"msg_date": "Wed, 11 Dec 2019 23:49:05 +0000",
"msg_from": "Alex Adriaanse <alex@oseberg.io>",
"msg_from_op": true,
"msg_subject": "Re: Corruption with duplicate primary key"
},
{
"msg_contents": "On Wed, Dec 11, 2019 at 11:46:40PM +0000, Alex Adriaanse wrote:\n>On Thu., December 5, 2019 at 5:45 PM, Tomas Vondra wrote:\n>> At first I thought maybe this might be due to collations changing and\n>> breaking the index silently. What collation are you using?\n>\n>We're using en_US.utf8. We did not make any collation changes to my\n>knowledge.\n>\n\nWell, the idea was more that glibc got updated and the collations\nchanged because of that (without PostgreSQL having a chance to even\nnotice that).\n\n>> 1) When you do the queries, do they use index scan or sequential\n>> scan? Perhaps it does sequential scan, and if you force index scan\n>> (e.g. by rewriting the query) it'll only find one of those rows.\n>\n>By default it used an index scan. When I re-ran the query today (and\n>confirmed that the query used an index only scan) I did not see any\n>duplicates. If I force a sequential scan using \"SET\n>enable_index[only]scan = false\" the duplicates reappear.\n>\n\nHmmm, that's probably a sign of some sort of index corruption. Clearly,\nwhen a row can't be found through an index, it's invisible to code\nenforcing the unique constraint (relying on the index).\n\n>However, using a backup from a week ago I see duplicates in both the\n>query that uses an index only scan as well as the query that uses the\n>sequential scan. So somehow over the past week the index got changed to\n>eliminate duplicates.\n>\n\nHmmm, that's interesting ... and confusing.\n\nThe good thing is that this is not an upgrade issue, because there was\nno corruption right after the upgrade.\n\nBut then apparently the corruption appeared, and then disappeared for\nsome unknown reason, but only from the index. Puzzling.\n\n>> 2) Can you check in backups if this data corruption was present in\n>> the PG10 cluster, before running pg_upgrade?\n>\n>Sure. I just checked and did not see any corruption in the PG10\n>pre-upgrade backup. 
I also re-upgraded that PG10 backup to PG12, and\n>right after the upgrade I did not see any corruption either. I checked\n>using both index scans and sequential scans.\n>\n\nOK, thanks. That's a valuable piece of information.\n\nHow active is the system and can you do PITR? That is, can you try\nrestoring it into different points in time by replaying WAL? Then we\ncould narrow down when the corruption appeared and inspect the WAL\nfrom that period.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 13 Dec 2019 00:25:26 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Corruption with duplicate primary key"
},
{
"msg_contents": "On Thu., December 12, 2019 at 5:25 PM, Tomas Vondra wrote:\n>On Wed, Dec 11, 2019 at 11:46:40PM +0000, Alex Adriaanse wrote:\n>>On Thu., December 5, 2019 at 5:45 PM, Tomas Vondra wrote:\n>>> At first I thought maybe this might be due to collations changing and\n>>> breaking the index silently. What collation are you using?\n>>\n>>We're using en_US.utf8. We did not make any collation changes to my\n>>knowledge.\n>\n>Well, the idea was more that glibc got updated and the collations\n>changed because of that (without PostgreSQL having a chance to even\n>notice that).\n\nClosing the loop on this, I've investigated this some more and it turns out this is exactly what happened. As you suspected, the issue had nothing to do with pg_upgrade or PG12, but rather the glibc upgrade that was seen in Debian Buster. The postgres:10 and postgres:11 images are based on Debian Stretch, whereas postgres:12 is based on Buster.\n\nWhen I kept the database on an older version of Postgres (10 or 11) but switched from the older Docker image to the postgres:12 or debian:buster(-slim) image, manually installing older Postgres packages inside those images, I saw index corruption there too.\n\nThanks for the input!\n\nAlex\n\n",
"msg_date": "Wed, 15 Jan 2020 20:48:08 +0000",
"msg_from": "Alex Adriaanse <alex@oseberg.io>",
"msg_from_op": true,
"msg_subject": "Re: Corruption with duplicate primary key"
}
] |
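The failure mode this thread converges on — a glibc collation change silently invalidating the sort order of an existing btree index — can be sketched outside PostgreSQL. The following is an illustrative model only, not PostgreSQL code: the two "collations" are simulated by two different Python orderings.

```python
import bisect

# Illustrative model of the corruption discussed above (not PostgreSQL code):
# a btree index stores keys in collation order, and lookups binary-search on
# that order. If glibc's collation changes underneath the index, searches
# using the new order can miss keys stored under the old order, so the
# unique-constraint check never sees the existing row and a duplicate slips in.

# "Old collation": case-insensitive ordering, used when the index was built.
index_keys = sorted(["Apple", "banana", "Cherry"], key=str.lower)

def index_lookup(sorted_keys, key):
    # "New collation": plain byte-wise comparison, used after the upgrade.
    i = bisect.bisect_left(sorted_keys, key)
    return i < len(sorted_keys) and sorted_keys[i] == key

print(index_lookup(index_keys, "Cherry"))  # False: the index scan misses the key
print("Cherry" in index_keys)              # True: a sequential scan still finds it
```

This matches Alex's observation that the duplicates show up under a sequential scan but not under an index scan. On a real cluster, the `amcheck` extension (`bt_index_check`) is the usual way to detect this class of index corruption.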
[
{
"msg_contents": "New users frequently attempt to run PostgreSQL's command line utilities\nfrom the psql prompt.\n\nThey tend to be confused when this appears to do absolutely nothing:\n\n psql=> pg_restore\n psql->\n\nsince they're generally not going to semicolon-terminate the command either.\n\nThe attached patch detects common command names when they appear first on a\nnew input line and prints a help message. If the buffer is empty a more\ndetailed message is printed and the input is swallowed. Otherwise, much\nlike how we handle \"help\" etc,\na short message is printed and the input is still added to the buffer.\n\n psql=> pg_restore\n \"pg_restore\" is a command line utility program.\n Use it from the system terminal or command prompt not from psql.\n psql=>\n psql=> select 1\n psql-> pg_restore\n \"pg_restore\" is a command-line utility program not a psql command. See\n\"help\".\n psql->\n\nWording advice would be welcome.\n\nI'd be tempted to backpatch this, since it's one of the things I see users\nconfused by most often now - right up there with pg_hba.conf issues,\nforgetting a semicolon in psql, etc.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Fri, 6 Dec 2019 12:58:11 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] print help from psql when user tries to run pg_restore,\n pg_dump etc"
},
{
"msg_contents": "Hello Craig,\n\n> New users frequently attempt to run PostgreSQL's command line utilities\n> from the psql prompt.\n\nAlas, that is true.\n\nI also have the reverse, i.e. SQL commands fed to bash, which does not \nlike it much.\n\n> They tend to be confused when this appears to do absolutely nothing:\n>\n> psql=> pg_restore\n> psql->\n>\n> since they're generally not going to semicolon-terminate the command either.\n>\n> The attached patch detects common command names when they appear first on a\n> new input line prints a help message. If the buffer is empty a more\n> detailed message is printed and the input is swallowed. Otherwise, much\n> like how we handle \"help\" etc,\n> a short message is printed and the input is still added to the buffer.\n>\n> psql=> pg_restore\n> \"pg_restore\" is a command line utility program.\n> Use it from the system terminal or command prompt not from psql.\n\n… prompt, not from psql. (added comma?)\n\n> psql=>\n> psql=> select 1\n> psql-> pg_restore\n> \"pg_restore\" is a command-line utility program not a psql command. See\n> \"help\".\n> psql->\n>\n> Wording advice would be welcome.\n>\n> I'd be tempted to backpatch this, since it's one of the things I see users\n> confused by most often now - right up there with pg_hba.conf issues,\n> forgetting a semicolon in psql, etc.\n\nI doubt that backpatching is reasonable.\n\nAre we that sure that there is no legitimate reason to enter such lines on \npsql, eg:\n\n psql=> SELECT '\n psql'> pg_whatever ...'\n psql-> ...\n\n\nAlthough I can confirm that the problem exists, I'm unsure about whether \npsql should fix it. What are the opinions around?\n\n-- \nFabien.",
"msg_date": "Fri, 6 Dec 2019 14:31:43 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] print help from psql when user tries to run pg_restore,\n pg_dump etc"
}
] |
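The behavior Craig's patch proposes can be modeled in a few lines. This is a hedged sketch in Python rather than the actual C change to psql: the two messages are quoted from the thread, while the set of recognized utility names and the matching logic are illustrative assumptions.

```python
# Sketch of the patch's behavior, in Python rather than psql's C code.
# The messages are quoted from the thread; CLI_UTILITIES is illustrative.
CLI_UTILITIES = {"pg_dump", "pg_dumpall", "pg_restore", "pg_basebackup",
                 "createdb", "dropdb", "vacuumdb", "pg_ctl"}

def utility_hint(input_line, buffer_is_empty):
    """Return the help message to print, or None for ordinary input."""
    words = input_line.strip().split()
    if not words or words[0] not in CLI_UTILITIES:
        return None
    if buffer_is_empty:
        # Detailed message; the input is swallowed (not added to the buffer).
        return ('"%s" is a command line utility program.\n'
                "Use it from the system terminal or command prompt, not from psql."
                % words[0])
    # Short message; the input is still added to the query buffer.
    return ('"%s" is a command-line utility program not a psql command. '
            'See "help".' % words[0])

print(utility_hint("pg_restore", True).splitlines()[0])
print(utility_hint("select 1", True))
```

Fabien's objection applies here too: a check like this fires on the first word of a line, so it must not trigger inside an unterminated string literal continued from a previous line — something this sketch does not handle.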
[
{
"msg_contents": "Hi,\n\nIn few scenarios the message displayed in psql console is not consistent in\nwindows and linux. The execution results from few scenarios in windows and\nlinux is listed below:\n\nIn CentOS\n========================================================\n\n*After transaction idle timeout*postgres=# SET\nidle_in_transaction_session_timeout=300;\nSET\npostgres=# BEGIN;\nBEGIN\npostgres=# SELECT * FROM pg_class;\nFATAL: terminating connection due to idle-in-transaction timeout\nserver closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Succeeded.\n\n\n*After pg_terminate_backend from another session:*postgres=# select * from\ndual;\nFATAL: terminating connection due to administrator command\nserver closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Succeeded.\n\n\n*Similarly in pg_ctl kill TERM and drop database with (force).*\nIn Windows\n========================================================\n\n*After transaction idle timeout*postgres=# set\nidle_in_transaction_session_timeout=300;\nSET\npostgres=# begin;\nBEGIN\npostgres=# select * from dual\npostgres-# ;\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Succeeded.\n\n\n*After pg_terminate_backend from another session:*postgres=# select * from\ndual;\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. 
Attempting reset: Succeeded.\n\n\n*Similarly in pg_ctl kill TERM and drop database with (force).*There may be\nsome more scenarios which I'm missing.\n\nIt is noticed that in all the 4 cases the message \"FATAL: terminating\nconnection due to administrator command\" does not appear in windows.\n\nHowever the following message is present in the server log file:\nFATAL: terminating connection due to administrator command\n\nThe reason for this looks like:\nWhen the server closes a connection, it sends the ErrorResponse packet, and\nthen closes the socket and terminates the backend process. If the packet is\nreceived before the server closes the connection, the error message is\nreceived in both windows and linux. If the packet is not received before\nthe server closes the connection, the error message is not received in case\nof windows whereas in linux it is received.\n\nThere have been a couple of discussions earlier also on this [1] & [2], but\nwe could not find any alternate solution.\n\nOne of the options that msdn suggests in [3] is to use the SO_LINGER option, we\nhad tried this option with no luck in solving. One other thing that we had\ntried was to sleep for 1 second before closing the socket, this solution\nworks if the client is active, whereas in case of inactive clients it does\nnot solve the problem. One other thought that we had was to simultaneously\ncheck the connection from psql, when we are waiting for query input in\ngets_interactive function or have a separate thread to check the connection\nstatus periodically, this might work only in case of psql but will not work\nfor applications which use libpq. 
Amit had also suggested one solution in\n[4], where he proposed 'I have also tried calling closesocket() explicitly\nin our function socket_close which has changed the error message to \"could\nnot receive data from server: Software caused connection abort\n(0x00002745/10053)\".'\nOr\nShould we add some documentation for the above behavior.\n\nThoughts?\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CA%2BhUKGJowQypXSKsjws9A%2BnEQDD0-mExHZqFXtJ09N209rCO5A%40mail.gmail.com#0629f079bc59ecdaa0d6ac9f8f2c18ac\n[2]\nhttps://www.postgresql.org/message-id/87k1iy44fd.fsf@news-spur.riddles.org.uk\n[3]\nhttps://docs.microsoft.com/en-us/windows/win32/api/winsock/nf-winsock-closesocket\n[4]\nhttps://www.postgresql.org/message-id/CAA4eK1%2BGNyjaPK77y%2Beuh5eAgM75pncG1JYZhxYZF%2BSgS6NpjA%40mail.gmail.com\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n",
"msg_date": "Fri, 6 Dec 2019 11:23:45 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "closesocket behavior in different platforms"
},
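The close-then-lose sequence described in this report can be sketched outside PostgreSQL. The following is a hypothetical, self-contained Python sketch (none of it is PostgreSQL source; the message text and the SO_LINGER values are illustrative): a "backend" sends a final FATAL line and immediately closes its socket. On Linux the client can still read the buffered message after the peer closes, which is exactly the half of the behavior the report says Windows does not guarantee.

```python
import socket
import struct
import threading

def backend(listener):
    """Accept one connection, send a final message, close immediately."""
    conn, _ = listener.accept()
    # The MSDN-suggested SO_LINGER option (tried without luck in the
    # thread): linger on, 5 second timeout.
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 5))
    conn.sendall(b"FATAL:  terminating connection due to administrator command\n")
    conn.close()  # close right after sending, like a terminating backend

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
t = threading.Thread(target=backend, args=(listener,))
t.start()

client = socket.socket()
client.connect(listener.getsockname())
t.join()                  # the peer has already sent and closed by now
data = client.recv(1024)  # on Linux the buffered FATAL line is still readable
print(data.decode().strip())
```

On Windows, per the reports quoted in this thread, the equivalent recv() can instead fail with "Software caused connection abort (10053)" when the close overtakes the client's read, which is why the FATAL line never reaches psql there.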
{
"msg_contents": "On Fri, Dec 6, 2019 at 11:24 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> It is noticed that in all the 4 cases the message \"FATAL: terminating connection due to administrator command\" does not appear in windows.\n>\n> However the following message is present in the server log file:\n> FATAL: terminating connection due to administrator command\n>\n> The reason for this looks like:\n> When the server closes a connection, it sends the ErrorResponse packet, and then closes the socket and terminates the backend process. If the packet is received before the server closes the connection, the error message is received in both windows and linux. If the packet is not received before the server closes the connection, the error message is not received in case of windows where as in linux it is received.\n>\n> There have been a couple of discussion earlier also on this [1] & [2], but we could not find any alternate solution.\n>\n> One of the options that msdn suggests in [3] is to use SO_LINGER option, we had tried this option with no luck in solving. One other thing that we had tried was to sleep for 1 second before closing the socket, this solution works if the client is active, whereas in case of inactive clients it does not solves the problem. One other thought that we had was to simultaneously check the connection from psql, when we are waiting for query input in gets_interactive function or have a separate thread to check the connection status periodically, this might work only in case of psql but will not work for application which uses libpq. 
Amit had also suggested one solution in [4], where he proposed 'I have also tried calling closesocket() explicitly in our function socket_close which has changed the error message to \"could not receive data from server: Software caused connection abort (0x00002745/10053)\".'\n>\n\nBased on previous investigation and information in this email, I don't\nsee anything we can do about this.\n\n> Should we add some documentation for the above behavior.\n>\n\nThat sounds reasonable to me. Any proposal for the same? One idea\ncould be to add something like \"Client Disconnection Problems\" after\nthe \"Client Connection Problems\" section in docs [1].\n\nAnybody else has any better suggestions on this topic?\n\n\n[1] - https://www.postgresql.org/docs/devel/server-start.html\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 21 Jan 2020 11:22:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: closesocket behavior in different platforms"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 11:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Dec 6, 2019 at 11:24 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > It is noticed that in all the 4 cases the message \"FATAL: terminating connection due to administrator command\" does not appear in windows.\n> >\n> > However the following message is present in the server log file:\n> > FATAL: terminating connection due to administrator command\n> >\n> > The reason for this looks like:\n> > When the server closes a connection, it sends the ErrorResponse packet, and then closes the socket and terminates the backend process. If the packet is received before the server closes the connection, the error message is received in both windows and linux. If the packet is not received before the server closes the connection, the error message is not received in case of windows where as in linux it is received.\n> >\n> > There have been a couple of discussion earlier also on this [1] & [2], but we could not find any alternate solution.\n> >\n> > One of the options that msdn suggests in [3] is to use SO_LINGER option, we had tried this option with no luck in solving. One other thing that we had tried was to sleep for 1 second before closing the socket, this solution works if the client is active, whereas in case of inactive clients it does not solves the problem. One other thought that we had was to simultaneously check the connection from psql, when we are waiting for query input in gets_interactive function or have a separate thread to check the connection status periodically, this might work only in case of psql but will not work for application which uses libpq. 
Amit had also suggested one solution in [4], where he proposed 'I have also tried calling closesocket() explicitly in our function socket_close which has changed the error message to \"could not receive data from server: Software caused connection abort (0x00002745/10053)\".'\n> >\n>\n> Based on previous investigation and information in this email, I don't\n> see anything we can do about this.\n>\n> > Should we add some documentation for the above behavior.\n> >\n>\n> That sounds reasonable to me. Any proposal for the same? One idea\n> could be to add something like \"Client Disconnection Problems\" after\n> the \"Client Connection Problems\" section in docs [1].\n>\n\nThanks for your review and suggestion. I have made a patch based on\nsimilar lines. Attached patch has the doc update with the explanation.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 29 Jan 2020 16:33:44 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: closesocket behavior in different platforms"
},
{
"msg_contents": "On Wed, Jan 29, 2020 at 4:34 PM vignesh C <vignesh21@gmail.com> wrote:\n\n> On Tue, Jan 21, 2020 at 11:22 AM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >\n> > On Fri, Dec 6, 2019 at 11:24 AM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n\n[...]\n\n>\n> Thanks for your review and suggestion. I have made a patch based on\n> similar lines. Attached patch has the doc update with the explanation.\n> Thoughts?\n>\n\nHi Vignesh,\n\nI have looked into the patch, realised that some format tagging and\nthe grammar changes are needed. Commented inline below:\n\n+\n+ <para>\n+ You will get server closed the connection unexpectedly message while\n\nThe error message is usually wrapped inside the <computeroutput> tag.\n\n+ trying to execute sql command on disconnected connection. The message\nis\n\nAlso, s/disconnected connection/the disconnected connection\n\n+ slightly different in windows and non-windows. In non-windows, you\nwill\n\ns/in windows/on <systemitem class=\"osname\">Windows</systemitem>\ns/In non-windows/On non-windows\n\n+ see a FATAL message before the error message:\n\nHow about : On non-window you'll see a fatal error as below.\n\n+<screen>\n+FATAL: terminating connection due to idle-in-transaction timeout\n+server closed the connection unexpectedly\n+ This probably means the server terminated abnormally\n+ before or while processing the request.\n+The connection to the server was lost. Attempting reset: Succeeded.\n+</screen>\n+ In windows, you might not see the FATAL message:\n\ns/In windows /On <systemitem class=\"osname\">Windows</systemitem>\ns/FATAL message/fatal error\n\n+<screen>\n+server closed the connection unexpectedly\n+ This probably means the server terminated abnormally\n+ before or while processing the request.\n+The connection to the server was lost. Attempting reset: Succeeded.\n+</screen>\n+ This message \"FATAL: terminating connection due to\nidle-in-transaction\n\nUsually <quote> for doubt-quoting is used. 
Here I think we should remove FATAL\nand wrap the error message text inside <computeroutput> tag.\n\n+ timeout\" that is sent from server will not be displayed in windows,\n\nHow about : that is sent from the server will not be displayed on windows.\n\n+ however it will be present in the log file. The reason for this is, in\n\ns/however/However\ns/in/on\n\n+ windows the client cannot receive the message sent by the server when\nthe\n\ns/windows/Windows or <systemitem class=\"osname\">Windows</systemitem>\n\n+ server has closed the client connection. This behavior can be noticed\nwhen\n+ the client connection has been disconnected because of\n+ idle_in_transaction_session_timeout, pg_terminate_backend, pg_ctl kill\n+ TERM and drop database with (force).\n\ns/idle_in_transaction_session_timeout/<xref\nlinkend=\"guc-idle-in-transaction-session-timeout\"/>\ns/pg_terminate_backend/<function>pg_terminate_backend()</function>\ns/pg_ctl kill TERM/<command>DROP DATABASE ... WITH ( FORCE )</command>\n\nRegards,\nAmul",
"msg_date": "Wed, 29 Jan 2020 20:17:49 +0530",
"msg_from": "amul sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: closesocket behavior in different platforms"
},
{
"msg_contents": "On Wed, Jan 29, 2020 at 6:04 AM vignesh C <vignesh21@gmail.com> wrote:\n> Thanks for your review and suggestion. I have made a patch based on\n> similar lines. Attached patch has the doc update with the explanation.\n> Thoughts?\n\nDocumenting this doesn't seem very useful to me. If we could fix the\ncode, that would be useful, but otherwise I think I'd just do nothing.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 29 Jan 2020 09:59:10 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: closesocket behavior in different platforms"
},
{
"msg_contents": "On Wed, Jan 29, 2020 at 8:29 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Jan 29, 2020 at 6:04 AM vignesh C <vignesh21@gmail.com> wrote:\n> > Thanks for your review and suggestion. I have made a patch based on\n> > similar lines. Attached patch has the doc update with the explanation.\n> > Thoughts?\n>\n> Documenting this doesn't seem very useful to me.\n>\n\nI thought of documenting it because this has been reported/discussed\nmultiple times (see some of the links of discussions at the end of the\nfirst email) and every time we need to spend time explaining the same\nthing. However, if we decide not to do that I am fine with it.\n\n> If we could fix the\n> code, that would be useful, but otherwise I think I'd just do nothing.\n>\n\nYeah, that is our first choice as well, but there doesn't seem to be a\ngood solution to it as this is a platform-specific behavior.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 30 Jan 2020 09:04:48 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: closesocket behavior in different platforms"
},
{
"msg_contents": "On Thu, Jan 30, 2020 at 9:04 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jan 29, 2020 at 8:29 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Wed, Jan 29, 2020 at 6:04 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > Thanks for your review and suggestion. I have made a patch based on\n> > > similar lines. Attached patch has the doc update with the explanation.\n> > > Thoughts?\n> >\n> > Documenting this doesn't seem very useful to me.\n> >\n>\n> I thought of documenting it because this has been reported/discussed\n> multiple times (see some of the links of discussions at the end of the\n> first email) and every time we need to spend time explaining the same\n> thing. However, if we decide not to do that I am fine with it.\n>\n\nDoes anybody else have any opinion on whether it makes sense to\ndocument this behavior? To summarize for others, the behavior\ndifference as noted by Vignesh in his patch is:\n\n+\n+ <para>\n+ You will get server closed the connection unexpectedly message while\n+ trying to execute sql command on disconnected connection. The message is\n+ slightly different in windows and non-windows. In non-windows, you will\n+ see a FATAL message before the error message:\n+<screen>\n+FATAL: terminating connection due to idle-in-transaction timeout\n+server closed the connection unexpectedly\n+ This probably means the server terminated abnormally\n+ before or while processing the request.\n+The connection to the server was lost. Attempting reset: Succeeded.\n+</screen>\n+ In windows, you might not see the FATAL message:\n+<screen>\n+server closed the connection unexpectedly\n+ This probably means the server terminated abnormally\n+ before or while processing the request.\n+The connection to the server was lost. Attempting reset: Succeeded.\n\nWe have spent a decent amount of time on this and it is due to windows\nAPI behaving differently. 
By documenting, we might avoid the future\neffort of explaining this to users.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 5 Feb 2020 08:06:31 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: closesocket behavior in different platforms"
}
] |
[
{
"msg_contents": "Hello Hackers:\n I'm reading the code of optimizer and get confused about the 3\nfunctions. add_path/set_cheapest/get_cheapest_fractional_path\n\n\nadd_(partial_)path:\nFor every relations, optimizer will build path for it and add then call\nadd_path to the rel->pathlist. during this stage, *it compare the current\nrel->pathlist with the new one, then it may discard it or add it into the\npathlist. finally we may have multi path now. *\n\n*set_cheapest*\nafter we finished the add_path for each relation, we will call\nset_cheapest, *the cheapest is the one with the lowest total cost plus the\nparameterized path. *\n\n*get_cheapest_fractional_path*\n*after we build the paths for all the relation, it calls\n*get_cheapest_fractional_path\nto get the best path and then build the plan.\n\nso my question is why do we need to have the 3 cost compare function?\n\nand i have another real case, and hope you can provide some hints:\n\nselect * from t1 where slow_func(t1.a, 3) == true; ==> parallel bitmap\nindex scan.\n\nselect * from t1, t2 where t2.pk = 1 and t1.b = t2.b and\nslow_function(t1.a, t2.a) == true; ==> I get nest loop without parallel\nscan. what I hope is after I get the t2.a (only 1 value), I want to go\nwith parallel index scan as well.\n\nIn this case, which part should I focus on to figure out the issue?\n\n\nThanks",
"msg_date": "Fri, 6 Dec 2019 13:56:01 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "what is the purpose to use 3 function to compare cost\n add_path/set_cheapest/get_cheapest_fractional_path"
},
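The division of labor asked about above can be made concrete with a toy Python model. This is an assumption-laden sketch, not the real planner: it ignores parameterized paths, pathkeys, and partial paths, and the cost numbers are invented. The point it illustrates is why one comparison is not enough: add_path() keeps every path that is not dominated on all cost dimensions, set_cheapest() then picks the lowest total cost for whole-result plans, and a fractional lookup can prefer a different survivor when only part of the rows is needed (e.g. under a LIMIT).

```python
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    startup_cost: float
    total_cost: float

def add_path(pathlist, new):
    """Keep `new` only if no existing path beats it on both cost
    dimensions; drop existing paths that `new` beats on both."""
    if any(p.startup_cost <= new.startup_cost and
           p.total_cost <= new.total_cost for p in pathlist):
        return pathlist  # dominated everywhere: discard the new path
    survivors = [p for p in pathlist
                 if not (new.startup_cost <= p.startup_cost and
                         new.total_cost <= p.total_cost)]
    return survivors + [new]

def set_cheapest(pathlist):
    """Cheapest path for fetching the whole result."""
    return min(pathlist, key=lambda p: p.total_cost)

def get_cheapest_fractional_path(pathlist, fraction):
    """Cheapest path when only `fraction` of the rows is needed."""
    return min(pathlist, key=lambda p: p.startup_cost +
               fraction * (p.total_cost - p.startup_cost))

paths = []
for p in (Path("seqscan+sort", 100.0, 120.0),
          Path("indexscan", 0.5, 200.0),
          Path("dominated", 150.0, 300.0)):
    paths = add_path(paths, p)

# Both non-dominated paths survive; each later stage picks differently.
assert [p.name for p in paths] == ["seqscan+sort", "indexscan"]
print(set_cheapest(paths).name)                        # seqscan+sort
print(get_cheapest_fractional_path(paths, 0.01).name)  # indexscan
```

In the real code add_path() also compares pathkeys, parallel safety and parameterization before deciding a path is redundant, so more survivors are kept than this two-dimensional sketch suggests.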
{
"msg_contents": "On Fri, Dec 6, 2019 at 11:26 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> Hello Hackers:\n> I'm reading the code of optimizer and get confused about the 3 functions. add_path/set_cheapest/get_cheapest_fractional_path\n>\n>\n> add_(partial_)path:\n> For every relations, optimizer will build path for it and add then call add_path to the rel->pathlist. during this stage, it compare the current rel->pathlist with the new one, then it may discard it or add it into the pathlist. finally we may have multi path now.\n>\n> set_cheapest\n> after we finished the add_path for each relation, we will call set_cheapest, the cheapest is the one with the lowest total cost plus the parameterized path.\n>\n> get_cheapest_fractional_path\n> after we build the paths for all the relation, it calls get_cheapest_fractional_path to get the best path and then build the plan.\n>\n> so my question is why do we need to have the 3 cost compare function?\n>\n> and i have another real case, and hope you can provide some hints:\n>\n> select * from t1 where slow_func(t1.a, 3) == true; ==> parallel bitmap index scan.\n>\n> select * from t1, t2 where t2.pk = 1 and t1.b = t2.b and slow_function(t1.a, t2.a) == true; ==> I get nest loop without parallel scan. what I hope is after I get the t2.a (only 1 value), I want to go with parallel index scan as well.\n>\n\nIt might be that the scan size of index is too small that we didn't\nconsider to pick parallel index scan. You might want to tweak\nmin_parallel_index_scan_size and see how it behaves. Just start by\nsetting it to 0 and see if it leads to what you want and then you can\nincrease its value and then see the behaviour. I think additionally,\nyou might want to debug build_index_paths and see the cost it\ngenerates for parallel path (the relevant code is near comment \"If\nappropriate, consider parallel index scan. 
..\") as compared to the\nnon-parallel path.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 7 Dec 2019 09:52:57 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: what is the purpose to use 3 function to compare cost\n add_path/set_cheapest/get_cheapest_fractional_path"
}
] |
[
{
"msg_contents": "We use system UTF-16 collation to implement UTF-8 collation on Windows. The\nPostgreSQL security team received a report, from Timothy Kuun, that this\ncollation does not uphold the \"symmetric law\" and \"transitive law\" that we\nrequire for btree operator classes. The attached test program demonstrates\nthis. http://www.delphigroups.info/2/62/478610.html quotes reports of that\nproblem going back eighteen years. Most code points are unaffected. Indexing\nan affected code point using such a collation can cause btree index scans to not\nfind a row they should find and can make a UNIQUE or PRIMARY KEY constraint\nadmit a duplicate. The security team determined that this doesn't qualify as a\nsecurity vulnerability, but it's still a bug.\n\nAll I can think to do is issue a warning whenever a CREATE DATABASE or CREATE\nCOLLATION combines UTF8 encoding with a locale having this problem. In a\ngreenfield, I would forbid affected combinations of encoding and locale. That\nis too harsh, considering the few code points affected and the difficulty of\nchanging the collation of existing databases. For CREATE DATABASE, all except\nLOCALE=C would trigger the warning. For CREATE COLLATION, ICU locales would\nalso not trigger the warning. Hence, the chief workaround is to use LOCALE=C at\nthe database level and ICU collations for indexes and operator invocations.\n(The ability to use an ICU collation at the database level would improve the\nuser experience here.) Better ideas?",
"msg_date": "Thu, 5 Dec 2019 22:34:01 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Windows UTF-8, non-ICU collation trouble"
},
{
"msg_contents": "On Fri, Dec 6, 2019 at 7:34 PM Noah Misch <noah@leadboat.com> wrote:\n> We use system UTF-16 collation to implement UTF-8 collation on Windows. The\n> PostgreSQL security team received a report, from Timothy Kuun, that this\n> collation does not uphold the \"symmetric law\" and \"transitive law\" that we\n> require for btree operator classes. The attached test program demonstrates\n> this. http://www.delphigroups.info/2/62/478610.html quotes reports of that\n> problem going back eighteen years. Most code points are unaffected. Indexing\n> an affected code point using such a collation can cause btree index scans to not\n> find a row they should find and can make a UNIQUE or PRIMARY KEY constraint\n> admit a duplicate. The security team determined that this doesn't qualify as a\n> security vulnerability, but it's still a bug.\n\nHuh. Does this apply in modern times? Since Windows 10, I thought\nthey adopted[1] CLDR data to drive that, the same definitions used (or\nsomewhere in the process of being adopted by) GNU, Illumos, FreeBSD\netc. Basically, everyone gave up on trying to own this rats nest of a\nproblem and deferred to the experts. If you can still get\nindex-busting behaviour out of modern Windows collations, wouldn't\nthat be a bug that someone can file against SQL Server, Windows etc\nand get fixed?\n\n[1] https://blogs.msdn.microsoft.com/shawnste/2015/08/29/locale-data-in-windows-10-cldr/\n\n\n",
"msg_date": "Fri, 6 Dec 2019 19:56:08 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows UTF-8, non-ICU collation trouble"
},
{
"msg_contents": "On Fri, Dec 06, 2019 at 07:56:08PM +1300, Thomas Munro wrote:\n> On Fri, Dec 6, 2019 at 7:34 PM Noah Misch <noah@leadboat.com> wrote:\n> > We use system UTF-16 collation to implement UTF-8 collation on Windows. The\n> > PostgreSQL security team received a report, from Timothy Kuun, that this\n> > collation does not uphold the \"symmetric law\" and \"transitive law\" that we\n> > require for btree operator classes. The attached test program demonstrates\n> > this. http://www.delphigroups.info/2/62/478610.html quotes reports of that\n> > problem going back eighteen years. Most code points are unaffected. Indexing\n> > an affected code point using such a collation can cause btree index scans to not\n> > find a row they should find and can make a UNIQUE or PRIMARY KEY constraint\n> > admit a duplicate. The security team determined that this doesn't qualify as a\n> > security vulnerability, but it's still a bug.\n> \n> Huh. Does this apply in modern times? Since Windows 10, I thought\n> they adopted[1] CLDR data to drive that, the same definitions used (or\n> somewhere in the process of being adopted by) GNU, Illumos, FreeBSD\n> etc. Basically, everyone gave up on trying to own this rats nest of a\n> problem and deferred to the experts.\n\nBased on my test program, it applies to Windows Server 2016. I didn't test\nnewer versions.\n\n> If you can still get\n> index-busting behaviour out of modern Windows collations, wouldn't\n> that be a bug that someone can file against SQL Server, Windows etc\n> and get fixed?\n\nPerhaps. I wouldn't have high hopes, given the behavior's long tenure and the\nrisk of breaking a different set of applications.\n\n\n",
"msg_date": "Thu, 5 Dec 2019 23:33:49 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: Windows UTF-8, non-ICU collation trouble"
},
{
"msg_contents": "On Fri, Dec 6, 2019 at 8:33 PM Noah Misch <noah@leadboat.com> wrote:\n> On Fri, Dec 06, 2019 at 07:56:08PM +1300, Thomas Munro wrote:\n> > On Fri, Dec 6, 2019 at 7:34 PM Noah Misch <noah@leadboat.com> wrote:\n> > > We use system UTF-16 collation to implement UTF-8 collation on Windows. The\n> > > PostgreSQL security team received a report, from Timothy Kuun, that this\n> > > collation does not uphold the \"symmetric law\" and \"transitive law\" that we\n> > > require for btree operator classes. The attached test program demonstrates\n> > > this. http://www.delphigroups.info/2/62/478610.html quotes reports of that\n> > > problem going back eighteen years. Most code points are unaffected. Indexing\n> > > an affected code point using such a collation can cause btree index scans to not\n> > > find a row they should find and can make a UNIQUE or PRIMARY KEY constraint\n> > > admit a duplicate. The security team determined that this doesn't qualify as a\n> > > security vulnerability, but it's still a bug.\n> >\n> > Huh. Does this apply in modern times? Since Windows 10, I thought\n> > they adopted[1] CLDR data to drive that, the same definitions used (or\n> > somewhere in the process of being adopted by) GNU, Illumos, FreeBSD\n> > etc. Basically, everyone gave up on trying to own this rats nest of a\n> > problem and deferred to the experts.\n>\n> Based on my test program, it applies to Windows Server 2016. I didn't test\n> newer versions.\n\nI ran a variation of your program on Appveyor's Studio/Server 2019\nimage, and the result was the same: it thinks that cmp(s1, s2) == 0,\ncmp(s2, s3) == 0, but cmp(s1, s3) == 1, so the operator fails to be\ntransitive.\n\nThese strings include combining characters probably used in a weird\nway[1][2], and they confuse my terminal software. Although they don't\nall agree, the 3 other implementations I tried gave self-consistent\nanswers. 
All of the below expressions are true:\n\ncreate table t as select e'\\u11a7\\u1188\\ud7a2' s1,\ne'\\u11a7\\ud7a2\\u1188' s2, e'\\ud7a2\\u11a7\\u1188' s3;\nselect s1 > s2 collate \"ko-x-icu\", s2 > s3 collate \"ko-x-icu\", s1 > s3\ncollate \"ko-x-icu\" from t; [ICU]\nselect s1 > s2 collate \"ko_KR.UTF-8\", s2 > s3 collate \"ko_KR.UTF-8\",\ns1 > s3 collate \"ko_KR.UTF-8\" from t; [FreeBSD]\nselect s1 < s2 collate \"ko_KR.utf8\", s2 < s3 collate \"ko_KR.utf8\", s1\n< s3 collate \"ko_KR.utf8\" from t; [GNU]\n\n> > If you can still get\n> > index-busting behaviour out of modern Windows collations, wouldn't\n> > that be a bug that someone can file against SQL Server, Windows etc\n> > and get fixed?\n>\n> Perhaps. I wouldn't have high hopes, given the behavior's long tenure and the\n> risk of breaking a different set of applications.\n\nI found a SQL Server test website[3] and tried to get it to do\nsomething strange, using \"Windows\" collations (the ones that are\nsupposed to be compatible with CompareString() AKA strcoll(), much\nlike our \"libc\" provider). For Latin1_General_100_CI_AS_SC_UTF8 and\nKorean_100_CS_AS it insisted that cmp(s1, s2) == 1, cmp(s2, s3) == 0,\ncmp(s1, s3) == 1, while for Korean_90_CS_AS it said -1, 0, -1, all\nself-consistent answers, matching neither your results nor the results\nof other implementations. Perhaps my query is wrong, or perhaps I\ndon't understand how to find the right collation name to break\nsomething, or perhaps it does some kind of normalisation or other\ntransformation on the strings. 
Clearly I failed to access the same\ncode that your sort-locale.c test reaches.\n\ncreate table t (s1 nvarchar(10) collate Korean_100_CS_AS,\n s2 nvarchar(10) collate Korean_100_CS_AS,\n s3 nvarchar(10) collate Korean_100_CS_AS);\n\ninsert into t values (concat(nchar(0x11a7), nchar(0x1188), nchar(0xd7a2)),\n concat(nchar(0x11a7), nchar(0xd7a2), nchar(0x1188)),\n concat(nchar(0xd7a2), nchar(0x11a7), nchar(0x1188)));\n\nselect case when s1 < s2 then -1 when s1 > s2 then 1 else 0 end,\n case when s2 < s3 then -1 when s2 > s3 then 1 else 0 end,\n case when s1 < s3 then -1 when s1 > s3 then 1 else 0 end\n from t;\n\n[1] https://en.wikipedia.org/wiki/List_of_Hangul_jamo\n[2] https://www.unicode.org/faq/korean.html\n[3] https://sqltest.net/#927795\n\n\n",
"msg_date": "Tue, 10 Dec 2019 15:41:15 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows UTF-8, non-ICU collation trouble"
},
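The failure described in this message can be checked mechanically. The sketch below is plain Python using the three jamo strings and the comparison results quoted above (cmp(s1, s2) == 0, cmp(s2, s3) == 0, cmp(s1, s3) == 1); the code-point comparator is only a stand-in for a self-consistent order, not any of the tested collations. It shows that the reported CompareString answers cannot come from any total order, while a consistent comparator passes the same transitivity test a btree opclass requires.

```python
s1 = "\u11a7\u1188\ud7a2"  # the Hangul jamo strings from the test above
s2 = "\u11a7\ud7a2\u1188"
s3 = "\ud7a2\u11a7\u1188"
strings = [s1, s2, s3]

def is_transitive(cmp, items):
    """cmp(a,b) <= 0 and cmp(b,c) <= 0 must imply cmp(a,c) <= 0."""
    return not any(cmp(a, b) <= 0 and cmp(b, c) <= 0 and cmp(a, c) > 0
                   for a in items for b in items for c in items)

def codepoint_cmp(a, b):
    # Memcmp-style code-point order, as the C locale gives for UTF-8 here.
    return (a > b) - (a < b)

# The Windows comparison results reported in this message, as a table.
reported = {(s1, s2): 0, (s2, s1): 0, (s2, s3): 0, (s3, s2): 0,
            (s1, s3): 1, (s3, s1): -1}

def reported_cmp(a, b):
    return 0 if a == b else reported[(a, b)]

print(is_transitive(codepoint_cmp, strings))  # True: a usable total order
print(is_transitive(reported_cmp, strings))   # False: breaks the btree law
```

This is essentially the cross-check a btree operator class relies on implicitly: once transitivity fails, an index search can take a branch that excludes a key the comparator elsewhere calls equal.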
{
"msg_contents": "On Tue, Dec 10, 2019 at 03:41:15PM +1300, Thomas Munro wrote:\n> On Fri, Dec 6, 2019 at 8:33 PM Noah Misch <noah@leadboat.com> wrote:\n> > On Fri, Dec 06, 2019 at 07:56:08PM +1300, Thomas Munro wrote:\n> > > On Fri, Dec 6, 2019 at 7:34 PM Noah Misch <noah@leadboat.com> wrote:\n> > > > We use system UTF-16 collation to implement UTF-8 collation on Windows. The\n> > > > PostgreSQL security team received a report, from Timothy Kuun, that this\n> > > > collation does not uphold the \"symmetric law\" and \"transitive law\" that we\n> > > > require for btree operator classes. The attached test program demonstrates\n> > > > this. http://www.delphigroups.info/2/62/478610.html quotes reports of that\n> > > > problem going back eighteen years. Most code points are unaffected. Indexing\n> > > > an affected code point using such a collation can cause btree index scans to not\n> > > > find a row they should find and can make a UNIQUE or PRIMARY KEY constraint\n> > > > admit a duplicate. The security team determined that this doesn't qualify as a\n> > > > security vulnerability, but it's still a bug.\n> > >\n> > > Huh. Does this apply in modern times? Since Windows 10, I thought\n> > > they adopted[1] CLDR data to drive that, the same definitions used (or\n> > > somewhere in the process of being adopted by) GNU, Illumos, FreeBSD\n> > > etc. Basically, everyone gave up on trying to own this rats nest of a\n> > > problem and deferred to the experts.\n> >\n> > Based on my test program, it applies to Windows Server 2016. I didn't test\n> > newer versions.\n> \n> I ran a variation of your program on Appveyor's Studio/Server 2019\n> image, and the result was the same: it thinks that cmp(s1, s2) == 0,\n> cmp(s2, s3) == 0, but cmp(s1, s3) == 1, so the operator fails to be\n> transitive.\n\nIf that test is captured in self-contained artifacts (a few config files, a\npublic git repository, etc.), could you share them? If not, no need to\nassemble such artifacts. 
I probably won't use them, but I'd be curious to\nbrowse them if you've already assembled them.\n\n> > > If you can still get\n> > > index-busting behaviour out of modern Windows collations, wouldn't\n> > > that be a bug that someone can file against SQL Server, Windows etc\n> > > and get fixed?\n> >\n> > Perhaps. I wouldn't have high hopes, given the behavior's long tenure and the\n> > risk of breaking a different set of applications.\n> \n> I found a SQL Server test website[3] and tried to get it to do\n> something strange, using \"Windows\" collations (the ones that are\n> supposed to be compatible with CompareString() AKA strcoll(), much\n> like our \"libc\" provider). For Latin1_General_100_CI_AS_SC_UTF8 and\n> Korean_100_CS_AS it insisted that cmp(s1, s2) == 1, cmp(s2, s3) == 0,\n> cmp(s1, s3) == 1, while for Korean_90_CS_AS it said -1, 0, -1, all\n> self-consistent answers, matching neither your results nor the results\n> of other implementations.\n\nThis does suggest some set of CompareString* parameters is free from the\nproblem. If that's right, we could offer collations based on that. (I'm not\nsure it would be worth offering; ICU may be enough.)\n\nThanks for this extensive testing.\n\n\n",
"msg_date": "Tue, 10 Dec 2019 01:29:19 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: Windows UTF-8, non-ICU collation trouble"
},
{
"msg_contents": "On Tue, Dec 10, 2019 at 10:29 PM Noah Misch <noah@leadboat.com> wrote:\n> On Tue, Dec 10, 2019 at 03:41:15PM +1300, Thomas Munro wrote:\n> > I ran a variation of your program on Appveyor's Studio/Server 2019\n> > image, and the result was the same: it thinks that cmp(s1, s2) == 0,\n> > cmp(s2, s3) == 0, but cmp(s1, s3) == 1, so the operator fails to be\n> > transitive.\n>\n> If that test is captured in self-contained artifacts (a few config files, a\n> public git repository, etc.), could you share them? If not, no need to\n> assemble such artifacts. I probably won't use them, but I'd be curious to\n> browse them if you've already assembled them.\n\nhttps://ci.appveyor.com/project/macdice/locale-sort\nhttps://github.com/macdice/locale-sort\n\nTo understand which operating systems the images mentioned in\nappveyor.yml correspond to:\n\nhttps://www.appveyor.com/docs/windows-images-software/\n\n> This does suggest some set of CompareString* parameters is free from the\n> problem. If that's right, we could offer collations based on that. (I'm not\n> sure it would be worth offering; ICU may be enough.)\n\nIt would be nice to get to the bottom of that (for example, what is\nthe relationship between names like \"Korean_XXX\" and names like\n\"ko-KR\"?), but I'm unlikely to investigate further (I have enough\ntrouble getting N kinds of Unix to do what I want). Generally I like\nthe idea of continuing to support and recommend both operating system\nand ICU locales for different use cases. It should be easy to get all\nthe software on your system to agree on ordering, which seems like a\nthing you should want as an application designer. The lack of\nversioning is not a problem on Windows (see\nhttps://commitfest.postgresql.org/26/2351/).\n\n\n",
"msg_date": "Wed, 11 Dec 2019 13:54:47 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows UTF-8, non-ICU collation trouble"
},
{
"msg_contents": "On Wed, Dec 11, 2019 at 01:54:47PM +1300, Thomas Munro wrote:\n> On Tue, Dec 10, 2019 at 10:29 PM Noah Misch <noah@leadboat.com> wrote:\n> > This does suggest some set of CompareString* parameters is free from the\n> > problem. If that's right, we could offer collations based on that. (I'm not\n> > sure it would be worth offering; ICU may be enough.)\n> \n> It would be nice to get to the bottom of that (for example, what is\n> the relationship between names like \"Korean_XXX\" and names like\n> \"ko-KR\"?), but I'm unlikely to investigate further (I have enough\n> trouble getting N kinds of Unix to do what I want). Generally I like\n> the idea of continuing to support and recommend both operating system\n> and ICU locales for different use cases. It should be easy to get all\n> the software on your system to agree on ordering, which seems like a\n> thing you should want as an application designer. The lack of\n> versioning is not a problem on Windows (see\n> https://commitfest.postgresql.org/26/2351/).\n\nI explored this further in a Windows-specific forum:\nhttps://social.msdn.microsoft.com/Forums/sqlserver/en-US/7682927d-d0f3-4278-a0cc-3fefc2ca4b65\n\nKey points relevant to PostgreSQL:\n- SQL Server contains its own snapshot of string collation data, so its\n sorting is independent of Windows APIs on the machine hosting SQL Server.\n The source of the snapshot is long-past Windows API behavior.\n- Passing older NLS version values to Windows string comparison functions does\n change collation of these particular test strings. On my Windows Server\n 2016 system, two older NLS versions were available, both of which exhibited\n transitivity for these strings.\n\nLooking at my original proposal in that light:\n\nOn Thu, Dec 05, 2019 at 10:34:01PM -0800, Noah Misch wrote:\n> All I can think to do is issue a warning whenever a CREATE DATABASE or CREATE\n> COLLATION combines UTF8 encoding with a locale having this problem. 
In a\n> greenfield, I would forbid affected combinations of encoding and locale. That\n> is too harsh, considering the few code points affected and the difficulty of\n> changing the collation of existing databases.\n\nThat's still the best I can think to do.\n\n> For CREATE DATABASE, all except\n> LOCALE=C would trigger the warning. For CREATE COLLATION, ICU locales would\n> also not trigger the warning. Hence, the chief workaround is to use LOCALE=C at\n> the database level and ICU collations for indexes and operator invocations.\n\nWhile that is true for Windows Server 2016, it looks like Windows 7 and\nWindows Server 2003 wouldn't trigger the warning. (That's of rapidly decaying\ninterest, of course.) If someone took your \"Collation versions on Windows\"\npatch a step further and allowed \"CREATE COLLATION x (LOCALE = 'de-DE',\nVERSION = 'a.b,c.d')\", one could specify the use of Windows 7 collation order\non Windows Server 2016, removing the problem (and stopping the warning).\n\n\n",
"msg_date": "Sat, 11 Jan 2020 16:23:58 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: Windows UTF-8, non-ICU collation trouble"
}
]
[
{
"msg_contents": "Hello.\n\nThis is the reased (and a bit fixed) version of the patch. This\napplies on the master HEAD and passes all provided tests.\n\nI took over this work from Sawada-san. I'll begin with reviewing the\ncurrent patch.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 06 Dec 2019 17:32:15 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, 6 Dec 2019 at 17:33, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> Hello.\n>\n> This is the reased (and a bit fixed) version of the patch. This\n> applies on the master HEAD and passes all provided tests.\n>\n> I took over this work from Sawada-san. I'll begin with reviewing the\n> current patch.\n>\n\nThe previous patch set is no longer applied cleanly to the current\nHEAD. I've updated and slightly modified the codes.\n\nThis patch set has been marked as Waiting on Author for a long time\nbut the correct status now is Needs Review. The patch actually was\nupdated and incorporated all review comments but they was not rebased\nactively.\n\nThe mail[1] I posted before would be helpful to understand the current\npatch design and there are README in the patch and a wiki page[2].\n\nI've marked this as Needs Review.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoDn98axH1bEoMnte%2BS7WWR%3DnsmOpjz1WGH-NvJi4aLu3Q%40mail.gmail.com\n[2] https://wiki.postgresql.org/wiki/Atomic_Commit_of_Distributed_Transactions\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 24 Jan 2020 15:00:50 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, Jan 24, 2020 at 11:31 AM Masahiko Sawada <\nmasahiko.sawada@2ndquadrant.com> wrote:\n\n> On Fri, 6 Dec 2019 at 17:33, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> wrote:\n> >\n> > Hello.\n> >\n> > This is the reased (and a bit fixed) version of the patch. This\n> > applies on the master HEAD and passes all provided tests.\n> >\n> > I took over this work from Sawada-san. I'll begin with reviewing the\n> > current patch.\n> >\n>\n> The previous patch set is no longer applied cleanly to the current\n> HEAD. I've updated and slightly modified the codes.\n>\n> This patch set has been marked as Waiting on Author for a long time\n> but the correct status now is Needs Review. The patch actually was\n> updated and incorporated all review comments but they was not rebased\n> actively.\n>\n> The mail[1] I posted before would be helpful to understand the current\n> patch design and there are README in the patch and a wiki page[2].\n>\n> I've marked this as Needs Review.\n>\n>\nHi Sawada san,\n\nI just had a quick look to 0001 and 0002 patch here is the few suggestions.\n\npatch: v27-0001:\n\nTypo: s/non-temprary/non-temporary\n----\n\npatch: v27-0002: (Note:The left-hand number is the line number in the\nv27-0002 patch):\n\n 138 +PostgreSQL's the global transaction manager (GTM), as a distributed\ntransaction\n 139 +participant The registered foreign transactions are tracked until the\nend of\n\nFull stop \".\" is missing after \"participant\"\n\n\n174 +API Contract With Transaction Management Callback Functions\n\nCan we just say \"Transaction Management Callback Functions\";\nTOBH, I am not sure that I understand this title.\n\n\n 203 +processing foreign transaction (i.g. preparing, committing or\naborting) the\n\nDo you mean \"i.e\" instead of i.g. ?\n\n\n269 + * RollbackForeignTransactionAPI. 
Registered participant servers are\nidentified\n\nAdd space before between RollbackForeignTransaction and API.\n\n\n 292 + * automatically so must be processed manually using by\npg_resovle_fdwxact()\n\nDo you mean pg_resolve_foreign_xact() here?\n\n\n 320 + * the foreign transaction is authorized to update the fields from\nits own\n 321 + * one.\n 322 +\n 323 + * Therefore, before doing PREPARE, COMMIT PREPARED or ROLLBACK\nPREPARED a\n\nPlease add asterisk '*' on line#322.\n\n\n 816 +static void\n 817 +FdwXactPrepareForeignTransactions(void)\n 818 +{\n 819 + ListCell *lcell;\n\nLet's have this variable name as \"lc\" like elsewhere.\n\n\n1036 + ereport(ERROR, (errmsg(\"could not insert a foreign\ntransaction entry\"),\n1037 + errdetail(\"duplicate entry with\ntransaction id %u, serverid %u, userid %u\",\n1038 + xid, serverid, userid)));\n1039 + }\n\nIncorrect formatting.\n\n\n1166 +/*\n1167 + * Return true and set FdwXactAtomicCommitReady to true if the\ncurrent transaction\n\nDo you mean ForeignTwophaseCommitIsRequired instead of\nFdwXactAtomicCommitReady?\n\n\n3529 +\n3530 +/*\n3531 + * FdwXactLauncherRegister\n3532 + * Register a background worker running the foreign transaction\n3533 + * launcher.\n3534 + */\n\nThis prolog style is not consistent with the other function in the file.\n\n\nAnd here are the few 
typos:\n\ns/conssitent/consistent\ns/consisnts/consist\ns/Foriegn/Foreign\ns/tranascation/transaction\ns/itselft/itself\ns/rolbacked/rollbacked\ns/trasaction/transaction\ns/transactio/transaction\ns/automically/automatically\ns/CommitForeignTransaciton/CommitForeignTransaction\ns/Similary/Similarly\ns/FDWACT_/FDWXACT_\ns/dink/disk\ns/requried/required\ns/trasactions/transactions\ns/prepread/prepared\ns/preapred/prepared\ns/beging/being\ns/gxact/xact\ns/in-dbout/in-doubt\ns/respecitively/respectively\ns/transction/transaction\ns/idenetifier/identifier\ns/identifer/identifier\ns/checkpoint'S/checkpoint's\ns/fo/of\ns/transcation/transaction\ns/trasanction/transaction\ns/non-temprary/non-temporary\ns/resovler_internal.h/resolver_internal.h\n\n\nRegards,\nAmul",
"msg_date": "Tue, 11 Feb 2020 09:11:27 +0530",
"msg_from": "amul sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Tue, 11 Feb 2020 at 12:42, amul sul <sulamul@gmail.com> wrote:\n>\n> On Fri, Jan 24, 2020 at 11:31 AM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n>>\n>> On Fri, 6 Dec 2019 at 17:33, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>> >\n>> > Hello.\n>> >\n>> > This is the reased (and a bit fixed) version of the patch. This\n>> > applies on the master HEAD and passes all provided tests.\n>> >\n>> > I took over this work from Sawada-san. I'll begin with reviewing the\n>> > current patch.\n>> >\n>>\n>> The previous patch set is no longer applied cleanly to the current\n>> HEAD. I've updated and slightly modified the codes.\n>>\n>> This patch set has been marked as Waiting on Author for a long time\n>> but the correct status now is Needs Review. The patch actually was\n>> updated and incorporated all review comments but they was not rebased\n>> actively.\n>>\n>> The mail[1] I posted before would be helpful to understand the current\n>> patch design and there are README in the patch and a wiki page[2].\n>>\n>> I've marked this as Needs Review.\n>>\n>\n> Hi Sawada san,\n>\n> I just had a quick look to 0001 and 0002 patch here is the few suggestions.\n>\n> patch: v27-0001:\n>\n> Typo: s/non-temprary/non-temporary\n> ----\n>\n> patch: v27-0002: (Note:The left-hand number is the line number in the v27-0002 patch):\n>\n> 138 +PostgreSQL's the global transaction manager (GTM), as a distributed transaction\n> 139 +participant The registered foreign transactions are tracked until the end of\n>\n> Full stop \".\" is missing after \"participant\"\n>\n>\n> 174 +API Contract With Transaction Management Callback Functions\n>\n> Can we just say \"Transaction Management Callback Functions\";\n> TOBH, I am not sure that I understand this title.\n>\n>\n> 203 +processing foreign transaction (i.g. preparing, committing or aborting) the\n>\n> Do you mean \"i.e\" instead of i.g. ?\n>\n>\n> 269 + * RollbackForeignTransactionAPI. 
Registered participant servers are identified\n>\n> Add space before between RollbackForeignTransaction and API.\n>\n>\n> 292 + * automatically so must be processed manually using by pg_resovle_fdwxact()\n>\n> Do you mean pg_resolve_foreign_xact() here?\n>\n>\n> 320 + * the foreign transaction is authorized to update the fields from its own\n> 321 + * one.\n> 322 +\n> 323 + * Therefore, before doing PREPARE, COMMIT PREPARED or ROLLBACK PREPARED a\n>\n> Please add asterisk '*' on line#322.\n>\n>\n> 816 +static void\n> 817 +FdwXactPrepareForeignTransactions(void)\n> 818 +{\n> 819 + ListCell *lcell;\n>\n> Let's have this variable name as \"lc\" like elsewhere.\n>\n>\n> 1036 + ereport(ERROR, (errmsg(\"could not insert a foreign transaction entry\"),\n> 1037 + errdetail(\"duplicate entry with transaction id %u, serverid %u, userid %u\",\n> 1038 + xid, serverid, userid)));\n> 1039 + }\n>\n> Incorrect formatting.\n>\n>\n> 1166 +/*\n> 1167 + * Return true and set FdwXactAtomicCommitReady to true if the current transaction\n>\n> Do you mean ForeignTwophaseCommitIsRequired instead of FdwXactAtomicCommitReady?\n>\n>\n> 3529 +\n> 3530 +/*\n> 3531 + * FdwXactLauncherRegister\n> 3532 + * Register a background worker running the foreign transaction\n> 3533 + * launcher.\n> 3534 + */\n>\n> This prolog style is not consistent with the other function in the file.\n>\n>\n> And here are the few typos:\n>\n> s/conssitent/consistent\n> s/consisnts/consist\n> s/Foriegn/Foreign\n> s/tranascation/transaction\n> s/itselft/itself\n> s/rolbacked/rollbacked\n> s/trasaction/transaction\n> s/transactio/transaction\n> s/automically/automatically\n> s/CommitForeignTransaciton/CommitForeignTransaction\n> s/Similary/Similarly\n> s/FDWACT_/FDWXACT_\n> s/dink/disk\n> s/requried/required\n> s/trasactions/transactions\n> s/prepread/prepared\n> s/preapred/prepared\n> s/beging/being\n> s/gxact/xact\n> s/in-dbout/in-doubt\n> s/respecitively/respectively\n> s/transction/transaction\n> 
s/idenetifier/identifier\n> s/identifer/identifier\n> s/checkpoint'S/checkpoint's\n> s/fo/of\n> s/transcation/transaction\n> s/trasanction/transaction\n> s/non-temprary/non-temporary\n> s/resovler_internal.h/resolver_internal.h\n>\n>\n\nThank you for reviewing the patch! I've incorporated all comments in\nlocal branch.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 19 Feb 2020 07:55:39 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Wed, 19 Feb 2020 at 07:55, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Tue, 11 Feb 2020 at 12:42, amul sul <sulamul@gmail.com> wrote:\n> >\n> > On Fri, Jan 24, 2020 at 11:31 AM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n> >>\n> >> On Fri, 6 Dec 2019 at 17:33, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> >> >\n> >> > Hello.\n> >> >\n> >> > This is the reased (and a bit fixed) version of the patch. This\n> >> > applies on the master HEAD and passes all provided tests.\n> >> >\n> >> > I took over this work from Sawada-san. I'll begin with reviewing the\n> >> > current patch.\n> >> >\n> >>\n> >> The previous patch set is no longer applied cleanly to the current\n> >> HEAD. I've updated and slightly modified the codes.\n> >>\n> >> This patch set has been marked as Waiting on Author for a long time\n> >> but the correct status now is Needs Review. The patch actually was\n> >> updated and incorporated all review comments but they was not rebased\n> >> actively.\n> >>\n> >> The mail[1] I posted before would be helpful to understand the current\n> >> patch design and there are README in the patch and a wiki page[2].\n> >>\n> >> I've marked this as Needs Review.\n> >>\n> >\n> > Hi Sawada san,\n> >\n> > I just had a quick look to 0001 and 0002 patch here is the few suggestions.\n> >\n> > patch: v27-0001:\n> >\n> > Typo: s/non-temprary/non-temporary\n> > ----\n> >\n> > patch: v27-0002: (Note:The left-hand number is the line number in the v27-0002 patch):\n> >\n> > 138 +PostgreSQL's the global transaction manager (GTM), as a distributed transaction\n> > 139 +participant The registered foreign transactions are tracked until the end of\n> >\n> > Full stop \".\" is missing after \"participant\"\n> >\n> >\n> > 174 +API Contract With Transaction Management Callback Functions\n> >\n> > Can we just say \"Transaction Management Callback Functions\";\n> > TOBH, I am not sure that I understand this title.\n> >\n> >\n> > 
203 +processing foreign transaction (i.g. preparing, committing or aborting) the\n> >\n> > Do you mean \"i.e\" instead of i.g. ?\n> >\n> >\n> > 269 + * RollbackForeignTransactionAPI. Registered participant servers are identified\n> >\n> > Add space before between RollbackForeignTransaction and API.\n> >\n> >\n> > 292 + * automatically so must be processed manually using by pg_resovle_fdwxact()\n> >\n> > Do you mean pg_resolve_foreign_xact() here?\n> >\n> >\n> > 320 + * the foreign transaction is authorized to update the fields from its own\n> > 321 + * one.\n> > 322 +\n> > 323 + * Therefore, before doing PREPARE, COMMIT PREPARED or ROLLBACK PREPARED a\n> >\n> > Please add asterisk '*' on line#322.\n> >\n> >\n> > 816 +static void\n> > 817 +FdwXactPrepareForeignTransactions(void)\n> > 818 +{\n> > 819 + ListCell *lcell;\n> >\n> > Let's have this variable name as \"lc\" like elsewhere.\n> >\n> >\n> > 1036 + ereport(ERROR, (errmsg(\"could not insert a foreign transaction entry\"),\n> > 1037 + errdetail(\"duplicate entry with transaction id %u, serverid %u, userid %u\",\n> > 1038 + xid, serverid, userid)));\n> > 1039 + }\n> >\n> > Incorrect formatting.\n> >\n> >\n> > 1166 +/*\n> > 1167 + * Return true and set FdwXactAtomicCommitReady to true if the current transaction\n> >\n> > Do you mean ForeignTwophaseCommitIsRequired instead of FdwXactAtomicCommitReady?\n> >\n> >\n> > 3529 +\n> > 3530 +/*\n> > 3531 + * FdwXactLauncherRegister\n> > 3532 + * Register a background worker running the foreign transaction\n> > 3533 + * launcher.\n> > 3534 + */\n> >\n> > This prolog style is not consistent with the other function in the file.\n> >\n> >\n> > And here are the few typos:\n> >\n> > s/conssitent/consistent\n> > s/consisnts/consist\n> > s/Foriegn/Foreign\n> > s/tranascation/transaction\n> > s/itselft/itself\n> > s/rolbacked/rollbacked\n> > s/trasaction/transaction\n> > s/transactio/transaction\n> > s/automically/automatically\n> > 
s/CommitForeignTransaciton/CommitForeignTransaction\n> > s/Similary/Similarly\n> > s/FDWACT_/FDWXACT_\n> > s/dink/disk\n> > s/requried/required\n> > s/trasactions/transactions\n> > s/prepread/prepared\n> > s/preapred/prepared\n> > s/beging/being\n> > s/gxact/xact\n> > s/in-dbout/in-doubt\n> > s/respecitively/respectively\n> > s/transction/transaction\n> > s/idenetifier/identifier\n> > s/identifer/identifier\n> > s/checkpoint'S/checkpoint's\n> > s/fo/of\n> > s/transcation/transaction\n> > s/trasanction/transaction\n> > s/non-temprary/non-temporary\n> > s/resovler_internal.h/resolver_internal.h\n> >\n> >\n>\n> Thank you for reviewing the patch! I've incorporated all comments in\n> local branch.\n\nAttached the updated version patch sets that incorporated review\ncomments from Amul and Muhammad.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 22 Feb 2020 11:14:58 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Sat, Feb 22, 2020 at 7:15 AM Masahiko Sawada <\nmasahiko.sawada@2ndquadrant.com> wrote:\n\n> On Wed, 19 Feb 2020 at 07:55, Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Tue, 11 Feb 2020 at 12:42, amul sul <sulamul@gmail.com> wrote:\n> > >\n> > > On Fri, Jan 24, 2020 at 11:31 AM Masahiko Sawada <\n> masahiko.sawada@2ndquadrant.com> wrote:\n> > >>\n> > >> On Fri, 6 Dec 2019 at 17:33, Kyotaro Horiguchi <\n> horikyota.ntt@gmail.com> wrote:\n> > >> >\n> > >> > Hello.\n> > >> >\n> > >> > This is the reased (and a bit fixed) version of the patch. This\n> > >> > applies on the master HEAD and passes all provided tests.\n> > >> >\n> > >> > I took over this work from Sawada-san. I'll begin with reviewing the\n> > >> > current patch.\n> > >> >\n> > >>\n> > >> The previous patch set is no longer applied cleanly to the current\n> > >> HEAD. I've updated and slightly modified the codes.\n> > >>\n> > >> This patch set has been marked as Waiting on Author for a long time\n> > >> but the correct status now is Needs Review. 
The patch actually was\n> > >> updated and incorporated all review comments but they was not rebased\n> > >> actively.\n> > >>\n> > >> The mail[1] I posted before would be helpful to understand the current\n> > >> patch design and there are README in the patch and a wiki page[2].\n> > >>\n> > >> I've marked this as Needs Review.\n> > >>\n> > >\n> > > Hi Sawada san,\n> > >\n> > > I just had a quick look to 0001 and 0002 patch here is the few\n> suggestions.\n> > >\n> > > patch: v27-0001:\n> > >\n> > > Typo: s/non-temprary/non-temporary\n> > > ----\n> > >\n> > > patch: v27-0002: (Note:The left-hand number is the line number in the\n> v27-0002 patch):\n> > >\n> > > 138 +PostgreSQL's the global transaction manager (GTM), as a\n> distributed transaction\n> > > 139 +participant The registered foreign transactions are tracked\n> until the end of\n> > >\n> > > Full stop \".\" is missing after \"participant\"\n> > >\n> > >\n> > > 174 +API Contract With Transaction Management Callback Functions\n> > >\n> > > Can we just say \"Transaction Management Callback Functions\";\n> > > TOBH, I am not sure that I understand this title.\n> > >\n> > >\n> > > 203 +processing foreign transaction (i.g. preparing, committing or\n> aborting) the\n> > >\n> > > Do you mean \"i.e\" instead of i.g. ?\n> > >\n> > >\n> > > 269 + * RollbackForeignTransactionAPI. 
Registered participant servers\n> are identified\n> > >\n> > > Add space before between RollbackForeignTransaction and API.\n> > >\n> > >\n> > > 292 + * automatically so must be processed manually using by\n> pg_resovle_fdwxact()\n> > >\n> > > Do you mean pg_resolve_foreign_xact() here?\n> > >\n> > >\n> > > 320 + * the foreign transaction is authorized to update the fields\n> from its own\n> > > 321 + * one.\n> > > 322 +\n> > > 323 + * Therefore, before doing PREPARE, COMMIT PREPARED or ROLLBACK\n> PREPARED a\n> > >\n> > > Please add asterisk '*' on line#322.\n> > >\n> > >\n> > > 816 +static void\n> > > 817 +FdwXactPrepareForeignTransactions(void)\n> > > 818 +{\n> > > 819 + ListCell *lcell;\n> > >\n> > > Let's have this variable name as \"lc\" like elsewhere.\n> > >\n> > >\n> > > 1036 + ereport(ERROR, (errmsg(\"could not insert a foreign\n> transaction entry\"),\n> > > 1037 + errdetail(\"duplicate entry with\n> transaction id %u, serverid %u, userid %u\",\n> > > 1038 + xid, serverid, userid)));\n> > > 1039 + }\n> > >\n> > > Incorrect formatting.\n> > >\n> > >\n> > > 1166 +/*\n> > > 1167 + * Return true and set FdwXactAtomicCommitReady to true if the\n> current transaction\n> > >\n> > > Do you mean ForeignTwophaseCommitIsRequired instead of\n> FdwXactAtomicCommitReady?\n> > >\n> > >\n> > > 3529 +\n> > > 3530 +/*\n> > > 3531 + * FdwXactLauncherRegister\n> > > 3532 + * Register a background worker running the foreign\n> transaction\n> > > 3533 + * launcher.\n> > > 3534 + */\n> > >\n> > > This prolog style is not consistent with the other function in the\n> file.\n> > >\n> > >\n> > > And here are the few typos:\n> > >\n> > > s/conssitent/consistent\n> > > s/consisnts/consist\n> > > s/Foriegn/Foreign\n> > > s/tranascation/transaction\n> > > s/itselft/itself\n> > > s/rolbacked/rollbacked\n> > > s/trasaction/transaction\n> > > s/transactio/transaction\n> > > s/automically/automatically\n> > > s/CommitForeignTransaciton/CommitForeignTransaction\n> > > 
s/Similary/Similarly\n> > > s/FDWACT_/FDWXACT_\n> > > s/dink/disk\n> > > s/requried/required\n> > > s/trasactions/transactions\n> > > s/prepread/prepared\n> > > s/preapred/prepared\n> > > s/beging/being\n> > > s/gxact/xact\n> > > s/in-dbout/in-doubt\n> > > s/respecitively/respectively\n> > > s/transction/transaction\n> > > s/idenetifier/identifier\n> > > s/identifer/identifier\n> > > s/checkpoint'S/checkpoint's\n> > > s/fo/of\n> > > s/transcation/transaction\n> > > s/trasanction/transaction\n> > > s/non-temprary/non-temporary\n> > > s/resovler_internal.h/resolver_internal.h\n> > >\n> > >\n> >\n> > Thank you for reviewing the patch! I've incorporated all comments in\n> > local branch.\n>\n> Attached the updated version patch sets that incorporated review\n> comments from Amul and Muhammad.\n>\n> Regards,\n>\n> --\n> Masahiko Sawada http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nHi Sawada San,\n\nI have been further reviewing and testing the transaction involving\nmultiple server patches.\nOverall the patches are working as expected bar a few important exceptions.\nSo as discussed over the call I have fixed the issues I found during the\ntesting\nand also rebased the patches with the current head of the master branch.\nSo can you please have a look at the attached updated patches.\n\nBelow is the list of changes I have made on top of V18 patches.\n\n1- In register_fdwxact(), As we are just storing the callback function\npointers from\nFdwRoutine in fdw_part structure, So I think we can avoid calling\nGetFdwRoutineByServerId() in TopMemoryContext.\nSo I have moved the MemoryContextSwitch to TopMemoryContext after the\nGetFdwRoutineByServerId() call.\n\n\n2- If PrepareForeignTransaction functionality is not present in some FDW\nthen\nduring the registration process we should only set the\nXACT_FLAGS_FDWNOPREPARE\ntransaction flag if the modified flag is also set for that server. 
As for\nthe server that has\nnot done any data modification within the transaction we do not do\ntwo-phase commit anyway.\n\n3- I have moved the foreign_twophase_commit in sample file after\nmax_foreign_transaction_resolvers because the default value of\nmax_foreign_transaction_resolvers\nis 0 and enabling the foreign_twophase_commit produces an error with default\nconfiguration parameter positioning in postgresql.conf\nAlso, foreign_twophase_commit configuration was missing the comments\nabout allowed values in the sample config file.\n\n4- Setting ForeignTwophaseCommitIsRequired in\nis_foreign_twophase_commit_required()\nfunction does not seem to be the correct place. The reason being, even when\nis_foreign_twophase_commit_required() returns true after setting\nForeignTwophaseCommitIsRequired\nto true, we could still end up not using the two-phase commit in the case\nwhen some server does\nnot support two-phase commit and foreign_twophase_commit is set to\nFOREIGN_TWOPHASE_COMMIT_PREFER\nmode. So I have moved the ForeignTwophaseCommitIsRequired assignment to\nPreCommit_FdwXacts()\nfunction after doing the prepare transaction.\n\n6- In prefer mode, we commit the transaction in single-phase if the server\ndoes not support\nthe two-phase commit. But instead of doing the single-phase commit right\naway,\nIMHO the better way is to wait until all the two-phase transactions are\nsuccessfully prepared\non servers that support the two-phase. Since an error during a \"PREPARE\"\nstage would\nroll back the transaction and in that case, we would end up with committed\ntransactions on\nthe server that lacks the support of the two-phase commit.\nSo I have modified the flow a little bit and instead of doing a one-phase\ncommit right away\nthe servers that do not support a two-phase commit are added to another list\nand that list is\nprocessed once we have successfully prepared all the transactions on\ntwo-phase supported\nforeign servers. 
Although this technique is also not bulletproof, still it\nis better than doing\nthe one-phase commits before doing the PREPAREs.\n\nAlso, I think we can improve on this one by throwing an error even in PREFER\nmode if there is more than one server that had data modified within the\ntransaction\nand lacks the two-phase commit support.\n\n7- Added a pfree() and list_free_deep() in PreCommit_FdwXacts() to reclaim\nthe\nmemory if fdw_part is removed from the list\n\n8- The function FdwXactWaitToBeResolved() was bailing out as soon as it\nfinds\n(FdwXactParticipants == NIL). The problem with that was in the case of\n\"COMMIT/ROLLBACK PREPARED\" we always get FdwXactParticipants = NIL and\neffectively the foreign prepared transactions(if any) associated with\nlocally\nprepared transactions were never getting resolved automatically.\n\n\npostgres=# BEGIN;\nBEGIN\nINSERT INTO test_local VALUES ( 2, 'TWO');\nINSERT 0 1\nINSERT INTO test_foreign_s1 VALUES ( 2, 'TWO');\nINSERT 0 1\nINSERT INTO test_foreign_s2 VALUES ( 2, 'TWO');\nINSERT 0 1\npostgres=*# PREPARE TRANSACTION 'local_prepared';\nPREPARE TRANSACTION\n\npostgres=# select * from pg_foreign_xacts ;\ndbid | xid | serverid | userid | status | in_doubt | identifier\n\n-------+-----+----------+--------+----------+----------+----------------------------\n 12929 | 515 | 16389 | 10 | prepared | f |\nfx_1339567411_515_16389_10\n 12929 | 515 | 16391 | 10 | prepared | f |\nfx_1963224020_515_16391_10\n(2 rows)\n\n-- Now commit the prepared transaction\n\npostgres=# COMMIT PREPARED 'local_prepared';\n\n\n\nCOMMIT PREPARED\n\n--Foreign prepared transactions associated with 'local_prepared' not\nresolved\n\npostgres=#\npostgres=# select * from pg_foreign_xacts ;\ndbid | xid | serverid | userid | status | in_doubt | identifier\n\n-------+-----+----------+--------+----------+----------+----------------------------\n 12929 | 515 | 16389 | 10 | prepared | f |\nfx_1339567411_515_16389_10\n 12929 | 515 | 16391 | 10 | prepared | f 
|\nfx_1963224020_515_16391_10\n(2 rows)\n\n\nSo to fix this in case of the two-phase transaction, the function checks\nthe existence\nof associated foreign prepared transactions before bailing out.\n\n9- In function XlogReadFdwXactData() XLogBeginRead call was missing before\nXLogReadRecord()\nthat was causing the crash during recovery.\n\n10- incorporated set_ps_display() signature change.\n\n\nBest regards,\n\n...\nMuhammad Usama\nHighGo Software (Canada/China/Pakistan)\nURL : http://www.highgo.ca",
"msg_date": "Fri, 27 Mar 2020 18:06:14 +0500",
"msg_from": "Muhammad Usama <m.usama@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, 27 Mar 2020 at 22:06, Muhammad Usama <m.usama@gmail.com> wrote:\n>\n> Hi Sawada San,\n>\n> I have been further reviewing and testing the transaction involving multiple server patches.\n> Overall the patches are working as expected bar a few important exceptions.\n> So as discussed over the call I have fixed the issues I found during the testing\n> and also rebased the patches with the current head of the master branch.\n> So can you please have a look at the attached updated patches.\n\nThank you for reviewing and updating the patch!\n\n>\n> Below is the list of changes I have made on top of V18 patches.\n>\n> 1- In register_fdwxact(), As we are just storing the callback function pointers from\n> FdwRoutine in fdw_part structure, So I think we can avoid calling\n> GetFdwRoutineByServerId() in TopMemoryContext.\n> So I have moved the MemoryContextSwitch to TopMemoryContext after the\n> GetFdwRoutineByServerId() call.\n\nAgreed.\n\n>\n>\n> 2- If PrepareForeignTransaction functionality is not present in some FDW then\n> during the registration process we should only set the XACT_FLAGS_FDWNOPREPARE\n> transaction flag if the modified flag is also set for that server. As for the server that has\n> not done any data modification within the transaction we do not do two-phase commit anyway.\n\nAgreed.\n\n>\n> 3- I have moved the foreign_twophase_commit in sample file after\n> max_foreign_transaction_resolvers because the default value of max_foreign_transaction_resolvers\n> is 0 and enabling the foreign_twophase_commit produces an error with default\n> configuration parameter positioning in postgresql.conf\n> Also, foreign_twophase_commit configuration was missing the comments\n> about allowed values in the sample config file.\n\nSounds good. Agreed.\n\n>\n> 4- Setting ForeignTwophaseCommitIsRequired in is_foreign_twophase_commit_required()\n> function does not seem to be the correct place. 
The reason being, even when\n> is_foreign_twophase_commit_required() returns true after setting ForeignTwophaseCommitIsRequired\n> to true, we could still end up not using the two-phase commit in the case when some server does\n> not support two-phase commit and foreign_twophase_commit is set to FOREIGN_TWOPHASE_COMMIT_PREFER\n> mode. So I have moved the ForeignTwophaseCommitIsRequired assignment to PreCommit_FdwXacts()\n> function after doing the prepare transaction.\n\nAgreed.\n\n>\n> 6- In prefer mode, we commit the transaction in single-phase if the server does not support\n> the two-phase commit. But instead of doing the single-phase commit right away,\n> IMHO the better way is to wait until all the two-phase transactions are successfully prepared\n> on servers that support the two-phase. Since an error during a \"PREPARE\" stage would\n> rollback the transaction and in that case, we would end up with committed transactions on\n> the server that lacks the support of the two-phase commit.\n\nWhen an error occurred before the local commit, a 2pc-unsupported\nserver could be rolled back or committed depending on the error\ntiming. On the other hand all 2pc-supported servers are always rolled\nback when an error occurred before the local commit. Therefore even if\nwe change the order of COMMIT and PREPARE it is still possible that we\nwill end up committing the part of 2pc-unsupported servers while\nrolling back others including 2pc-supported servers.\n\nI guess the motivation of your change is that since errors are likely\nto happen during executing PREPARE on foreign servers, we can minimize\nthe possibility of rolling back 2pc-unsupported servers by deferring\nthe commit of 2pc-unsupported server as much as possible. 
Is that\nright?\n\n> So I have modified the flow a little bit and instead of doing a one-phase commit right away\n> the servers that do not support a two-phase commit is added to another list and that list is\n> processed after once we have successfully prepared all the transactions on two-phase supported\n> foreign servers. Although this technique is also not bulletproof, still it is better than doing\n> the one-phase commits before doing the PREPAREs.\n\nHmm the current logic seems complex. Maybe we can just reverse the\norder of COMMIT and PREPARE; do PREPARE on all 2pc-supported and\nmodified servers first and then do COMMIT on others?\n\n>\n> Also, I think we can improve on this one by throwing an error even in PREFER\n> mode if there is more than one server that had data modified within the transaction\n> and lacks the two-phase commit support.\n>\n\nIIUC the concept of PREFER mode is that the transaction uses 2pc only\nfor 2pc-supported servers. IOW, even if the transaction modifies on a\n2pc-unsupported server we can proceed with the commit if in PREFER\nmode, which cannot if in REQUIRED mode. What is the motivation of your\nabove idea?\n\n> 7- Added a pfree() and list_free_deep() in PreCommit_FdwXacts() to reclaim the\n> memory if fdw_part is removed from the list\n\nI think at the end of the transaction we free entries of\nFdwXactParticipants list and set FdwXactParticipants to NIL. Why do we\nneed to do that in PreCommit_FdwXacts()?\n\n>\n> 8- The function FdwXactWaitToBeResolved() was bailing out as soon as it finds\n> (FdwXactParticipants == NIL). 
The problem with that was in the case of\n> \"COMMIT/ROLLBACK PREPARED\" we always get FdwXactParticipants = NIL and\n> effectively the foreign prepared transactions(if any) associated with locally\n> prepared transactions were never getting resolved automatically.\n>\n>\n> postgres=# BEGIN;\n> BEGIN\n> INSERT INTO test_local VALUES ( 2, 'TWO');\n> INSERT 0 1\n> INSERT INTO test_foreign_s1 VALUES ( 2, 'TWO');\n> INSERT 0 1\n> INSERT INTO test_foreign_s2 VALUES ( 2, 'TWO');\n> INSERT 0 1\n> postgres=*# PREPARE TRANSACTION 'local_prepared';\n> PREPARE TRANSACTION\n>\n> postgres=# select * from pg_foreign_xacts ;\n> dbid | xid | serverid | userid | status | in_doubt | identifier\n> -------+-----+----------+--------+----------+----------+----------------------------\n> 12929 | 515 | 16389 | 10 | prepared | f | fx_1339567411_515_16389_10\n> 12929 | 515 | 16391 | 10 | prepared | f | fx_1963224020_515_16391_10\n> (2 rows)\n>\n> -- Now commit the prepared transaction\n>\n> postgres=# COMMIT PREPARED 'local_prepared';\n>\n> COMMIT PREPARED\n>\n> --Foreign prepared transactions associated with 'local_prepared' not resolved\n>\n> postgres=#\n>\n> postgres=# select * from pg_foreign_xacts ;\n> dbid | xid | serverid | userid | status | in_doubt | identifier\n> -------+-----+----------+--------+----------+----------+----------------------------\n> 12929 | 515 | 16389 | 10 | prepared | f | fx_1339567411_515_16389_10\n> 12929 | 515 | 16391 | 10 | prepared | f | fx_1963224020_515_16391_10\n> (2 rows)\n>\n>\n> So to fix this in case of the two-phase transaction, the function checks the existence\n> of associated foreign prepared transactions before bailing out.\n>\n\nGood catch. But looking at your change, we should not accept the case\nwhere FdwXactParticipants == NULL but TwoPhaseExists(wait_xid) ==\nfalse.\n\n if (FdwXactParticipants == NIL)\n {\n /*\n * If we are here because of COMMIT/ROLLBACK PREPARED then the\n * FdwXactParticipants list would be empty. 
So we need to\n * see if there are any foreign prepared transactions exists\n * for this prepared transaction\n */\n if (TwoPhaseExists(wait_xid))\n {\n List *foreign_trans = NIL;\n\n foreign_trans = get_fdwxacts(MyDatabaseId,\nwait_xid, InvalidOid, InvalidOid,\n false, false, true);\n\n if (foreign_trans == NIL)\n return;\n list_free(foreign_trans);\n }\n }\n\n> 9- In function XlogReadFdwXactData() XLogBeginRead call was missing before XLogReadRecord()\n> that was causing the crash during recovery.\n\nAgreed.\n\n>\n> 10- incorporated set_ps_display() signature change.\n\nThanks.\n\nRegarding other changes you did in v19 patch, I have some comments:\n\n1.\n+ ereport(LOG,\n+ (errmsg(\"trying to %s the foreign transaction\nassociated with transaction %u on server %u\",\n+ fdwxact->status ==\nFDWXACT_STATUS_COMMITTING?\"COMMIT\":\"ABORT\",\n+ fdwxact->local_xid,\nfdwxact->serverid)));\n+\n\nWhy do we need to emit LOG message in pg_resolve_foreign_xact() SQL function?\n\n2.\ndiff --git a/src/bin/pg_waldump/fdwxactdesc.c b/src/bin/pg_waldump/fdwxactdesc.c\ndeleted file mode 120000\nindex ce8c21880c..0000000000\n--- a/src/bin/pg_waldump/fdwxactdesc.c\n+++ /dev/null\n@@ -1 +0,0 @@\n-../../../src/backend/access/rmgrdesc/fdwxactdesc.c\n\\ No newline at end of file\ndiff --git a/src/bin/pg_waldump/fdwxactdesc.c b/src/bin/pg_waldump/fdwxactdesc.c\nnew file mode 100644\nindex 0000000000..ce8c21880c\n--- /dev/null\n+++ b/src/bin/pg_waldump/fdwxactdesc.c\n@@ -0,0 +1 @@\n+../../../src/backend/access/rmgrdesc/fdwxactdesc.c\n\nWe need to remove src/bin/pg_waldump/fdwxactdesc.c from the patch.\n\n3.\n--- a/doc/src/sgml/monitoring.sgml\n+++ b/doc/src/sgml/monitoring.sgml\n@@ -1526,14 +1526,14 @@ postgres 27093 0.0 0.0 30096 2752 ?\n Ss 11:34 0:00 postgres: ser\n <entry><literal>SafeSnapshot</literal></entry>\n <entry>Waiting for a snapshot for a <literal>READ ONLY\nDEFERRABLE</literal> transaction.</entry>\n </row>\n- <row>\n- <entry><literal>SyncRep</literal></entry>\n- 
<entry>Waiting for confirmation from remote server during\nsynchronous replication.</entry>\n- </row>\n <row>\n <entry><literal>FdwXactResolution</literal></entry>\n <entry>Waiting for all foreign transaction participants to\nbe resolved during atomic commit among foreign servers.</entry>\n </row>\n+ <row>\n+ <entry><literal>SyncRep</literal></entry>\n+ <entry>Waiting for confirmation from remote server during\nsynchronous replication.</entry>\n+ </row>\n <row>\n <entry morerows=\"4\"><literal>Timeout</literal></entry>\n <entry><literal>BaseBackupThrottle</literal></entry>\n\nWe need to move the entry of FdwXactResolution to right before\nHash/Batch/Allocating for alphabetical order.\n\nI've incorporated your changes I agreed with to my local branch and\nwill incorporate other changes after discussion. I'll also do more\ntest and self-review and will submit the latest version patch.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 8 Apr 2020 15:16:17 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Wed, Apr 8, 2020 at 11:16 AM Masahiko Sawada <\nmasahiko.sawada@2ndquadrant.com> wrote:\n\n> On Fri, 27 Mar 2020 at 22:06, Muhammad Usama <m.usama@gmail.com> wrote:\n> >\n> > Hi Sawada San,\n> >\n> > I have been further reviewing and testing the transaction involving\n> multiple server patches.\n> > Overall the patches are working as expected bar a few important\n> exceptions.\n> > So as discussed over the call I have fixed the issues I found during the\n> testing\n> > and also rebased the patches with the current head of the master branch.\n> > So can you please have a look at the attached updated patches.\n>\n> Thank you for reviewing and updating the patch!\n>\n> >\n> > Below is the list of changes I have made on top of V18 patches.\n> >\n> > 1- In register_fdwxact(), As we are just storing the callback function\n> pointers from\n> > FdwRoutine in fdw_part structure, So I think we can avoid calling\n> > GetFdwRoutineByServerId() in TopMemoryContext.\n> > So I have moved the MemoryContextSwitch to TopMemoryContext after the\n> > GetFdwRoutineByServerId() call.\n>\n> Agreed.\n>\n> >\n> >\n> > 2- If PrepareForeignTransaction functionality is not present in some FDW\n> then\n> > during the registration process we should only set the\n> XACT_FLAGS_FDWNOPREPARE\n> > transaction flag if the modified flag is also set for that server. As\n> for the server that has\n> > not done any data modification within the transaction we do not do\n> two-phase commit anyway.\n>\n> Agreed.\n>\n> >\n> > 3- I have moved the foreign_twophase_commit in sample file after\n> > max_foreign_transaction_resolvers because the default value of\n> max_foreign_transaction_resolvers\n> > is 0 and enabling the foreign_twophase_commit produces an error with\n> default\n> > configuration parameter positioning in postgresql.conf\n> > Also, foreign_twophase_commit configuration was missing the comments\n> > about allowed values in the sample config file.\n>\n> Sounds good. 
Agreed.\n>\n> >\n> > 4- Setting ForeignTwophaseCommitIsRequired in\n> is_foreign_twophase_commit_required()\n> > function does not seem to be the correct place. The reason being, even\n> when\n> > is_foreign_twophase_commit_required() returns true after setting\n> ForeignTwophaseCommitIsRequired\n> > to true, we could still end up not using the two-phase commit in the\n> case when some server does\n> > not support two-phase commit and foreign_twophase_commit is set to\n> FOREIGN_TWOPHASE_COMMIT_PREFER\n> > mode. So I have moved the ForeignTwophaseCommitIsRequired assignment to\n> PreCommit_FdwXacts()\n> > function after doing the prepare transaction.\n>\n> Agreed.\n>\n> >\n> > 6- In prefer mode, we commit the transaction in single-phase if the\n> server does not support\n> > the two-phase commit. But instead of doing the single-phase commit right\n> away,\n> > IMHO the better way is to wait until all the two-phase transactions are\n> successfully prepared\n> > on servers that support the two-phase. Since an error during a \"PREPARE\"\n> stage would\n> > rollback the transaction and in that case, we would end up with\n> committed transactions on\n> > the server that lacks the support of the two-phase commit.\n>\n> When an error occurred before the local commit, a 2pc-unsupported\n> server could be rolled back or committed depending on the error\n> timing. On the other hand all 2pc-supported servers are always rolled\n> back when an error occurred before the local commit. Therefore even if\n> we change the order of COMMIT and PREPARE it is still possible that we\n> will end up committing the part of 2pc-unsupported servers while\n> rolling back others including 2pc-supported servers.\n>\n> I guess the motivation of your change is that since errors are likely\n> to happen during executing PREPARE on foreign servers, we can minimize\n> the possibility of rolling back 2pc-unsupported servers by deferring\n> the commit of 2pc-unsupported server as much as possible. 
Is that\n> right?\n>\n\nYes, that is correct. The idea of doing the COMMIT on NON-2pc-supported\nservers\nafter all the PREPAREs are successful is to minimize the chances of partial\ncommits.\nAnd as you mentioned there will still be chances of getting a partial\ncommit even with\nthis approach but the probability of that would be less than what it is\nwith the\ncurrent sequence.\n\n\n\n>\n> > So I have modified the flow a little bit and instead of doing a\n> one-phase commit right away\n> > the servers that do not support a two-phase commit is added to another\n> list and that list is\n> > processed after once we have successfully prepared all the transactions\n> on two-phase supported\n> > foreign servers. Although this technique is also not bulletproof, still\n> it is better than doing\n> > the one-phase commits before doing the PREPAREs.\n>\n> Hmm the current logic seems complex. Maybe we can just reverse the\n> order of COMMIT and PREPARE; do PREPARE on all 2pc-supported and\n> modified servers first and then do COMMIT on others?\n>\n\nAgreed, seems reasonable.\n\n>\n> >\n> > Also, I think we can improve on this one by throwing an error even in\n> PREFER\n> > mode if there is more than one server that had data modified within the\n> transaction\n> > and lacks the two-phase commit support.\n> >\n>\n> IIUC the concept of PREFER mode is that the transaction uses 2pc only\n> for 2pc-supported servers. IOW, even if the transaction modifies on a\n> 2pc-unsupported server we can proceed with the commit if in PREFER\n> mode, which cannot if in REQUIRED mode. What is the motivation of your\n> above idea?\n>\n\nI was thinking that we could change the behavior of PREFER mode such that\nwe only allow\nto COMMIT the transaction if the transaction needs to do a single-phase\ncommit on one\nserver only. 
That way we can ensure that we would never end up with partial\ncommit.\n\nOne idea in this regard would be to switch the local transaction to commit\nusing 2pc\nif there is a total of only one foreign server that does not support the\n2pc in the transaction,\nensuring that 1-pc commit servers should always be less than or equal to 1,\nand if more\nthan one foreign server requires 1-pc then we just throw an error.\n\nHowever, having said that, I am not 100% sure if it's a good or an acceptable\nidea, and\nI am okay with continuing with the current behavior of PREFER mode if we\nput it in the\ndocument that this mode can cause a partial commit.\n\n\n> > 7- Added a pfree() and list_free_deep() in PreCommit_FdwXacts() to\n> reclaim the\n> > memory if fdw_part is removed from the list\n>\n> I think at the end of the transaction we free entries of\n> FdwXactParticipants list and set FdwXactParticipants to NIL. Why do we\n> need to do that in PreCommit_FdwXacts()?\n>\n\nCorrect me if I am wrong, but the fdw_part structures are created in\nTopMemoryContext\nand if that fdw_part structure is removed from the list at pre_commit stage\n(because we did 1-PC COMMIT on it) then it would leak memory.\n\n\n> >\n> > 8- The function FdwXactWaitToBeResolved() was bailing out as soon as it\n> finds\n> > (FdwXactParticipants == NIL). 
The problem with that was in the case of\n> > \"COMMIT/ROLLBACK PREPARED\" we always get FdwXactParticipants = NIL and\n> > effectively the foreign prepared transactions(if any) associated with\n> locally\n> > prepared transactions were never getting resolved automatically.\n> >\n> >\n> > postgres=# BEGIN;\n> > BEGIN\n> > INSERT INTO test_local VALUES ( 2, 'TWO');\n> > INSERT 0 1\n> > INSERT INTO test_foreign_s1 VALUES ( 2, 'TWO');\n> > INSERT 0 1\n> > INSERT INTO test_foreign_s2 VALUES ( 2, 'TWO');\n> > INSERT 0 1\n> > postgres=*# PREPARE TRANSACTION 'local_prepared';\n> > PREPARE TRANSACTION\n> >\n> > postgres=# select * from pg_foreign_xacts ;\n> > dbid | xid | serverid | userid | status | in_doubt |\n> identifier\n> >\n> -------+-----+----------+--------+----------+----------+----------------------------\n> > 12929 | 515 | 16389 | 10 | prepared | f |\n> fx_1339567411_515_16389_10\n> > 12929 | 515 | 16391 | 10 | prepared | f |\n> fx_1963224020_515_16391_10\n> > (2 rows)\n> >\n> > -- Now commit the prepared transaction\n> >\n> > postgres=# COMMIT PREPARED 'local_prepared';\n> >\n> > COMMIT PREPARED\n> >\n> > --Foreign prepared transactions associated with 'local_prepared' not\n> resolved\n> >\n> > postgres=#\n> >\n> > postgres=# select * from pg_foreign_xacts ;\n> > dbid | xid | serverid | userid | status | in_doubt |\n> identifier\n> >\n> -------+-----+----------+--------+----------+----------+----------------------------\n> > 12929 | 515 | 16389 | 10 | prepared | f |\n> fx_1339567411_515_16389_10\n> > 12929 | 515 | 16391 | 10 | prepared | f |\n> fx_1963224020_515_16391_10\n> > (2 rows)\n> >\n> >\n> > So to fix this in case of the two-phase transaction, the function checks\n> the existence\n> > of associated foreign prepared transactions before bailing out.\n> >\n>\n> Good catch. 
But looking at your change, we should not accept the case\n> where FdwXactParticipants == NULL but TwoPhaseExists(wait_xid) ==\n> false.\n>\n> if (FdwXactParticipants == NIL)\n> {\n> /*\n> * If we are here because of COMMIT/ROLLBACK PREPARED then\n> the\n> * FdwXactParticipants list would be empty. So we need to\n> * see if there are any foreign prepared transactions exists\n> * for this prepared transaction\n> */\n> if (TwoPhaseExists(wait_xid))\n> {\n> List *foreign_trans = NIL;\n>\n> foreign_trans = get_fdwxacts(MyDatabaseId,\n> wait_xid, InvalidOid, InvalidOid,\n> false, false, true);\n>\n> if (foreign_trans == NIL)\n> return;\n> list_free(foreign_trans);\n> }\n> }\n>\n>\nSorry my bad, its a mistake on my part. we should just return from the\nfunction when\nFdwXactParticipants == NULL but TwoPhaseExists(wait_xid) == false.\n\n if (TwoPhaseExists(wait_xid))\n {\n List *foreign_trans = NIL;\n foreign_trans = get_fdwxacts(MyDatabaseId, wait_xid,\nInvalidOid, InvalidOid,\n false, false, true);\n\n if (foreign_trans == NIL)\n return;\n list_free(foreign_trans);\n }\n else\n return;\n\n\n> > 9- In function XlogReadFdwXactData() XLogBeginRead call was missing\n> before XLogReadRecord()\n> > that was causing the crash during recovery.\n>\n> Agreed.\n>\n> >\n> > 10- incorporated set_ps_display() signature change.\n>\n> Thanks.\n>\n> Regarding other changes you did in v19 patch, I have some comments:\n>\n> 1.\n> + ereport(LOG,\n> + (errmsg(\"trying to %s the foreign transaction\n> associated with transaction %u on server %u\",\n> + fdwxact->status ==\n> FDWXACT_STATUS_COMMITTING?\"COMMIT\":\"ABORT\",\n> + fdwxact->local_xid,\n> fdwxact->serverid)));\n> +\n>\n> Why do we need to emit LOG message in pg_resolve_foreign_xact() SQL\n> function?\n>\n\nThat change was not intended to get into the patch file. 
I had done it\nduring testing to\nquickly get info on which way the transaction is going to be resolved.\n\n\n> 2.\n> diff --git a/src/bin/pg_waldump/fdwxactdesc.c\n> b/src/bin/pg_waldump/fdwxactdesc.c\n> deleted file mode 120000\n> index ce8c21880c..0000000000\n> --- a/src/bin/pg_waldump/fdwxactdesc.c\n> +++ /dev/null\n> @@ -1 +0,0 @@\n> -../../../src/backend/access/rmgrdesc/fdwxactdesc.c\n> \\ No newline at end of file\n> diff --git a/src/bin/pg_waldump/fdwxactdesc.c\n> b/src/bin/pg_waldump/fdwxactdesc.c\n> new file mode 100644\n> index 0000000000..ce8c21880c\n> --- /dev/null\n> +++ b/src/bin/pg_waldump/fdwxactdesc.c\n> @@ -0,0 +1 @@\n> +../../../src/backend/access/rmgrdesc/fdwxactdesc.c\n>\n> We need to remove src/bin/pg_waldump/fdwxactdesc.c from the patch.\n>\n\nAgain sorry! that was an oversight on my part.\n\n\n> 3.\n> --- a/doc/src/sgml/monitoring.sgml\n> +++ b/doc/src/sgml/monitoring.sgml\n> @@ -1526,14 +1526,14 @@ postgres 27093 0.0 0.0 30096 2752 ?\n> Ss 11:34 0:00 postgres: ser\n> <entry><literal>SafeSnapshot</literal></entry>\n> <entry>Waiting for a snapshot for a <literal>READ ONLY\n> DEFERRABLE</literal> transaction.</entry>\n> </row>\n> - <row>\n> - <entry><literal>SyncRep</literal></entry>\n> - <entry>Waiting for confirmation from remote server during\n> synchronous replication.</entry>\n> - </row>\n> <row>\n> <entry><literal>FdwXactResolution</literal></entry>\n> <entry>Waiting for all foreign transaction participants to\n> be resolved during atomic commit among foreign servers.</entry>\n> </row>\n> + <row>\n> + <entry><literal>SyncRep</literal></entry>\n> + <entry>Waiting for confirmation from remote server during\n> synchronous replication.</entry>\n> + </row>\n> <row>\n> <entry morerows=\"4\"><literal>Timeout</literal></entry>\n> <entry><literal>BaseBackupThrottle</literal></entry>\n>\n> We need to move the entry of FdwXactResolution to right before\n> Hash/Batch/Allocating for alphabetical order.\n>\n\nAgreed!\n\n>\n> I've incorporated your 
changes I agreed with to my local branch and\n> will incorporate other changes after discussion. I'll also do more\n> test and self-review and will submit the latest version patch.\n>\n>\nMeanwhile, I found a couple more small issues: one is a break\nstatement missing\nin pgstat_get_wait_ipc(), and the other is that fdwxact_relaunch_resolvers()\ncould return an uninitialized value.\nI am attaching a small patch for these changes that can be applied on top\nof existing\npatches.\n\n\n\nRegards,\n>\n> --\n> Masahiko Sawada http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\n\nBest Regards,\nMuhammad Usama\nHighgo Software\nURL : http://www.highgo.ca
"msg_date": "Tue, 28 Apr 2020 15:37:11 +0500",
"msg_from": "Muhammad Usama <m.usama@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Tue, 28 Apr 2020 at 19:37, Muhammad Usama <m.usama@gmail.com> wrote:\n>\n>\n>\n> On Wed, Apr 8, 2020 at 11:16 AM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n>>\n>> On Fri, 27 Mar 2020 at 22:06, Muhammad Usama <m.usama@gmail.com> wrote:\n>> >\n>> > Hi Sawada San,\n>> >\n>> > I have been further reviewing and testing the transaction involving multiple server patches.\n>> > Overall the patches are working as expected bar a few important exceptions.\n>> > So as discussed over the call I have fixed the issues I found during the testing\n>> > and also rebased the patches with the current head of the master branch.\n>> > So can you please have a look at the attached updated patches.\n>>\n>> Thank you for reviewing and updating the patch!\n>>\n>> >\n>> > Below is the list of changes I have made on top of V18 patches.\n>> >\n>> > 1- In register_fdwxact(), As we are just storing the callback function pointers from\n>> > FdwRoutine in fdw_part structure, So I think we can avoid calling\n>> > GetFdwRoutineByServerId() in TopMemoryContext.\n>> > So I have moved the MemoryContextSwitch to TopMemoryContext after the\n>> > GetFdwRoutineByServerId() call.\n>>\n>> Agreed.\n>>\n>> >\n>> >\n>> > 2- If PrepareForeignTransaction functionality is not present in some FDW then\n>> > during the registration process we should only set the XACT_FLAGS_FDWNOPREPARE\n>> > transaction flag if the modified flag is also set for that server. 
As for the server that has\n>> > not done any data modification within the transaction we do not do two-phase commit anyway.\n>>\n>> Agreed.\n>>\n>> >\n>> > 3- I have moved the foreign_twophase_commit in sample file after\n>> > max_foreign_transaction_resolvers because the default value of max_foreign_transaction_resolvers\n>> > is 0 and enabling the foreign_twophase_commit produces an error with default\n>> > configuration parameter positioning in postgresql.conf\n>> > Also, foreign_twophase_commit configuration was missing the comments\n>> > about allowed values in the sample config file.\n>>\n>> Sounds good. Agreed.\n>>\n>> >\n>> > 4- Setting ForeignTwophaseCommitIsRequired in is_foreign_twophase_commit_required()\n>> > function does not seem to be the correct place. The reason being, even when\n>> > is_foreign_twophase_commit_required() returns true after setting ForeignTwophaseCommitIsRequired\n>> > to true, we could still end up not using the two-phase commit in the case when some server does\n>> > not support two-phase commit and foreign_twophase_commit is set to FOREIGN_TWOPHASE_COMMIT_PREFER\n>> > mode. So I have moved the ForeignTwophaseCommitIsRequired assignment to PreCommit_FdwXacts()\n>> > function after doing the prepare transaction.\n>>\n>> Agreed.\n>>\n>> >\n>> > 6- In prefer mode, we commit the transaction in single-phase if the server does not support\n>> > the two-phase commit. But instead of doing the single-phase commit right away,\n>> > IMHO the better way is to wait until all the two-phase transactions are successfully prepared\n>> > on servers that support the two-phase. Since an error during a \"PREPARE\" stage would\n>> > rollback the transaction and in that case, we would end up with committed transactions on\n>> > the server that lacks the support of the two-phase commit.\n>>\n>> When an error occurred before the local commit, a 2pc-unsupported\n>> server could be rolled back or committed depending on the error\n>> timing. 
On the other hand all 2pc-supported servers are always rolled\n>> back when an error occurred before the local commit. Therefore even if\n>> we change the order of COMMIT and PREPARE it is still possible that we\n>> will end up committing the part of 2pc-unsupported servers while\n>> rolling back others including 2pc-supported servers.\n>>\n>> I guess the motivation of your change is that since errors are likely\n>> to happen during executing PREPARE on foreign servers, we can minimize\n>> the possibility of rolling back 2pc-unsupported servers by deferring\n>> the commit of 2pc-unsupported server as much as possible. Is that\n>> right?\n>\n>\n> Yes, that is correct. The idea of doing the COMMIT on NON-2pc-supported servers\n> after all the PREPAREs are successful is to minimize the chances of partial commits.\n> And as you mentioned there will still be chances of getting a partial commit even with\n> this approach but the probability of that would be less than what it is with the\n> current sequence.\n>\n>\n>>\n>>\n>> > So I have modified the flow a little bit and instead of doing a one-phase commit right away\n>> > the servers that do not support a two-phase commit is added to another list and that list is\n>> > processed after once we have successfully prepared all the transactions on two-phase supported\n>> > foreign servers. Although this technique is also not bulletproof, still it is better than doing\n>> > the one-phase commits before doing the PREPAREs.\n>>\n>> Hmm the current logic seems complex. 
Maybe we can just reverse the\n>> order of COMMIT and PREPARE; do PREPARE on all 2pc-supported and\n>> modified servers first and then do COMMIT on others?\n>\n>\n> Agreed, seems reasonable.\n>>\n>>\n>> >\n>> > Also, I think we can improve on this one by throwing an error even in PREFER\n>> > mode if there is more than one server that had data modified within the transaction\n>> > and lacks the two-phase commit support.\n>> >\n>>\n>> IIUC the concept of PREFER mode is that the transaction uses 2pc only\n>> for 2pc-supported servers. IOW, even if the transaction modifies on a\n>> 2pc-unsupported server we can proceed with the commit if in PREFER\n>> mode, which cannot if in REQUIRED mode. What is the motivation of your\n>> above idea?\n>\n>\n> I was thinking that we could change the behavior of PREFER mode such that we only allow\n> to COMMIT the transaction if the transaction needs to do a single-phase commit on one\n> server only. That way we can ensure that we would never end up with partial commit.\n>\n\nI think it's good to avoid a partial commit by using your idea, but if\nwe want to avoid a partial commit we can use the 'required' mode,\nwhich requires all participant servers to support 2pc. We throw an\nerror if the participant servers include even one 2pc-unsupported\nserver that is modified within the transaction. Of course, if the only\nparticipant is a single 2pc-unsupported server it can use 1pc even in\nthe 'required' mode.\n\n> One Idea in this regards would be to switch the local transaction to commit using 2pc\n> if there is a total of only one foreign server that does not support the 2pc in the transaction,\n> ensuring that 1-pc commit servers should always be less than or equal to 1. and if there are more\n> than one foreign server requires 1-pc then we just throw an error.\n\nI might be missing your point but I suppose this idea is to do\nsomething like the following?\n\n1. 
prepare the local transaction\n2. commit the foreign transaction on 2pc-unsupported server\n3. commit the prepared local transaction\n\n>\n> However having said that, I am not 100% sure if its a good or an acceptable Idea, and\n> I am okay with continuing with the current behavior of PREFER mode if we put it in the\n> document that this mode can cause a partial commit.\n\nThere will be three types of servers: (a) a server doesn't support any\ntransaction API, (b) a server supports only commit and rollback API\nand (c) a server supports all APIs (commit, rollback and prepare).\nCurrently the postgres transaction manager manages only server-(b) and\nserver-(c), adding them to FdwXactParticipants. I'm considering changing\nthe code so that it also adds server-(a) to FdwXactParticipants, in\norder to track the number of server-(a) involved in the transaction.\nBut it doesn't insert an FdwXact entry for it, or manage transactions on\nthese servers.\n\nThe reason is this: if we want to have the 'required' mode strictly\nrequire all participant servers to support 2pc, we should use 2pc when\n(# of server-(a) + # of server-(b) + # of server-(c)) >= 2. But since\ncurrently we just track the modification on a server-(a) by a flag we\ncannot handle the case where two server-(a) are modified in the\ntransaction. On the other hand, if we don't consider server-(a) the\ntransaction could end up with a partial commit when a server-(a)\nparticipates in the transaction. Therefore I'm thinking of the above\nchange so that the transaction manager can ensure that a partial\ncommit doesn't happen in the 'required' mode. What do you think?\n\n>\n>>\n>> > 7- Added a pfree() and list_free_deep() in PreCommit_FdwXacts() to reclaim the\n>> > memory if fdw_part is removed from the list\n>>\n>> I think at the end of the transaction we free entries of\n>> FdwXactParticipants list and set FdwXactParticipants to NIL. 
Why do we\n>> need to do that in PreCommit_FdwXacts()?\n>\n>\n> Correct me if I am wrong, The fdw_part structures are created in TopMemoryContext\n> and if that fdw_part structure is removed from the list at pre_commit stage\n> (because we did 1-PC COMMIT on it) then it would leak memory.\n\nThe fdw_part structures are created in TopTransactionContext so these\nare freed at the end of the transaction.\n\n>\n>>\n>> >\n>> > 8- The function FdwXactWaitToBeResolved() was bailing out as soon as it finds\n>> > (FdwXactParticipants == NIL). The problem with that was in the case of\n>> > \"COMMIT/ROLLBACK PREPARED\" we always get FdwXactParticipants = NIL and\n>> > effectively the foreign prepared transactions(if any) associated with locally\n>> > prepared transactions were never getting resolved automatically.\n>> >\n>> >\n>> > postgres=# BEGIN;\n>> > BEGIN\n>> > INSERT INTO test_local VALUES ( 2, 'TWO');\n>> > INSERT 0 1\n>> > INSERT INTO test_foreign_s1 VALUES ( 2, 'TWO');\n>> > INSERT 0 1\n>> > INSERT INTO test_foreign_s2 VALUES ( 2, 'TWO');\n>> > INSERT 0 1\n>> > postgres=*# PREPARE TRANSACTION 'local_prepared';\n>> > PREPARE TRANSACTION\n>> >\n>> > postgres=# select * from pg_foreign_xacts ;\n>> > dbid | xid | serverid | userid | status | in_doubt | identifier\n>> > -------+-----+----------+--------+----------+----------+----------------------------\n>> > 12929 | 515 | 16389 | 10 | prepared | f | fx_1339567411_515_16389_10\n>> > 12929 | 515 | 16391 | 10 | prepared | f | fx_1963224020_515_16391_10\n>> > (2 rows)\n>> >\n>> > -- Now commit the prepared transaction\n>> >\n>> > postgres=# COMMIT PREPARED 'local_prepared';\n>> >\n>> > COMMIT PREPARED\n>> >\n>> > --Foreign prepared transactions associated with 'local_prepared' not resolved\n>> >\n>> > postgres=#\n>> >\n>> > postgres=# select * from pg_foreign_xacts ;\n>> > dbid | xid | serverid | userid | status | in_doubt | identifier\n>> > 
-------+-----+----------+--------+----------+----------+----------------------------\n>> > 12929 | 515 | 16389 | 10 | prepared | f | fx_1339567411_515_16389_10\n>> > 12929 | 515 | 16391 | 10 | prepared | f | fx_1963224020_515_16391_10\n>> > (2 rows)\n>> >\n>> >\n>> > So to fix this in case of the two-phase transaction, the function checks the existence\n>> > of associated foreign prepared transactions before bailing out.\n>> >\n>>\n>> Good catch. But looking at your change, we should not accept the case\n>> where FdwXactParticipants == NULL but TwoPhaseExists(wait_xid) ==\n>> false.\n>>\n>> if (FdwXactParticipants == NIL)\n>> {\n>> /*\n>> * If we are here because of COMMIT/ROLLBACK PREPARED then the\n>> * FdwXactParticipants list would be empty. So we need to\n>> * see if there are any foreign prepared transactions exists\n>> * for this prepared transaction\n>> */\n>> if (TwoPhaseExists(wait_xid))\n>> {\n>> List *foreign_trans = NIL;\n>>\n>> foreign_trans = get_fdwxacts(MyDatabaseId,\n>> wait_xid, InvalidOid, InvalidOid,\n>> false, false, true);\n>>\n>> if (foreign_trans == NIL)\n>> return;\n>> list_free(foreign_trans);\n>> }\n>> }\n>>\n>\n> Sorry my bad, its a mistake on my part. 
we should just return from the function when\n> FdwXactParticipants == NULL but TwoPhaseExists(wait_xid) == false.\n>\n> if (TwoPhaseExists(wait_xid))\n> {\n> List *foreign_trans = NIL;\n> foreign_trans = get_fdwxacts(MyDatabaseId, wait_xid, InvalidOid, InvalidOid,\n> false, false, true);\n>\n> if (foreign_trans == NIL)\n> return;\n> list_free(foreign_trans);\n> }\n> else\n> return;\n>\n>>\n>> > 9- In function XlogReadFdwXactData() XLogBeginRead call was missing before XLogReadRecord()\n>> > that was causing the crash during recovery.\n>>\n>> Agreed.\n>>\n>> >\n>> > 10- incorporated set_ps_display() signature change.\n>>\n>> Thanks.\n>>\n>> Regarding other changes you did in v19 patch, I have some comments:\n>>\n>> 1.\n>> + ereport(LOG,\n>> + (errmsg(\"trying to %s the foreign transaction\n>> associated with transaction %u on server %u\",\n>> + fdwxact->status ==\n>> FDWXACT_STATUS_COMMITTING?\"COMMIT\":\"ABORT\",\n>> + fdwxact->local_xid,\n>> fdwxact->serverid)));\n>> +\n>>\n>> Why do we need to emit LOG message in pg_resolve_foreign_xact() SQL function?\n>\n>\n> That change was not intended to get into the patch file. I had done it during testing to\n> quickly get info on which way the transaction is going to be resolved.\n>\n>>\n>> 2.\n>> diff --git a/src/bin/pg_waldump/fdwxactdesc.c b/src/bin/pg_waldump/fdwxactdesc.c\n>> deleted file mode 120000\n>> index ce8c21880c..0000000000\n>> --- a/src/bin/pg_waldump/fdwxactdesc.c\n>> +++ /dev/null\n>> @@ -1 +0,0 @@\n>> -../../../src/backend/access/rmgrdesc/fdwxactdesc.c\n>> \\ No newline at end of file\n>> diff --git a/src/bin/pg_waldump/fdwxactdesc.c b/src/bin/pg_waldump/fdwxactdesc.c\n>> new file mode 100644\n>> index 0000000000..ce8c21880c\n>> --- /dev/null\n>> +++ b/src/bin/pg_waldump/fdwxactdesc.c\n>> @@ -0,0 +1 @@\n>> +../../../src/backend/access/rmgrdesc/fdwxactdesc.c\n>>\n>> We need to remove src/bin/pg_waldump/fdwxactdesc.c from the patch.\n>\n>\n> Again sorry! 
that was an oversight on my part.\n>\n>>\n>> 3.\n>> --- a/doc/src/sgml/monitoring.sgml\n>> +++ b/doc/src/sgml/monitoring.sgml\n>> @@ -1526,14 +1526,14 @@ postgres 27093 0.0 0.0 30096 2752 ?\n>> Ss 11:34 0:00 postgres: ser\n>> <entry><literal>SafeSnapshot</literal></entry>\n>> <entry>Waiting for a snapshot for a <literal>READ ONLY\n>> DEFERRABLE</literal> transaction.</entry>\n>> </row>\n>> - <row>\n>> - <entry><literal>SyncRep</literal></entry>\n>> - <entry>Waiting for confirmation from remote server during\n>> synchronous replication.</entry>\n>> - </row>\n>> <row>\n>> <entry><literal>FdwXactResolution</literal></entry>\n>> <entry>Waiting for all foreign transaction participants to\n>> be resolved during atomic commit among foreign servers.</entry>\n>> </row>\n>> + <row>\n>> + <entry><literal>SyncRep</literal></entry>\n>> + <entry>Waiting for confirmation from remote server during\n>> synchronous replication.</entry>\n>> + </row>\n>> <row>\n>> <entry morerows=\"4\"><literal>Timeout</literal></entry>\n>> <entry><literal>BaseBackupThrottle</literal></entry>\n>>\n>> We need to move the entry of FdwXactResolution to right before\n>> Hash/Batch/Allocating for alphabetical order.\n>\n>\n> Agreed!\n>>\n>>\n>> I've incorporated your changes I agreed with to my local branch and\n>> will incorporate other changes after discussion. I'll also do more\n>> test and self-review and will submit the latest version patch.\n>>\n>\n> Meanwhile, I found a couple of more small issues, One is the break statement missing\n> i n pgstat_get_wait_ipc() and secondly fdwxact_relaunch_resolvers()\n> could return un-initialized value.\n> I am attaching a small patch for these changes that can be applied on top of existing\n> patches.\n\nThank you for the patch!\n\nI'm updating the patches because current behavior in error case would\nnot be good. For example, when an error occurs in the prepare phase,\nprepared transactions are left as in-doubt transaction. 
And these\ntransactions are not handled by the resolver process. That means that\na user could need to resolve these transactions manually after every\nabort, which is not good. In the abort case, I think that prepared\ntransactions can be resolved by the backend itself, rather than\nleaving them for the resolver. I'll submit the updated patch.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 30 Apr 2020 20:43:23 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Thu, 30 Apr 2020 at 20:43, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Tue, 28 Apr 2020 at 19:37, Muhammad Usama <m.usama@gmail.com> wrote:\n> >\n> >\n> >\n> > On Wed, Apr 8, 2020 at 11:16 AM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n> >>\n> >> On Fri, 27 Mar 2020 at 22:06, Muhammad Usama <m.usama@gmail.com> wrote:\n> >> >\n> >> > Hi Sawada San,\n> >> >\n> >> > I have been further reviewing and testing the transaction involving multiple server patches.\n> >> > Overall the patches are working as expected bar a few important exceptions.\n> >> > So as discussed over the call I have fixed the issues I found during the testing\n> >> > and also rebased the patches with the current head of the master branch.\n> >> > So can you please have a look at the attached updated patches.\n> >>\n> >> Thank you for reviewing and updating the patch!\n> >>\n> >> >\n> >> > Below is the list of changes I have made on top of V18 patches.\n> >> >\n> >> > 1- In register_fdwxact(), As we are just storing the callback function pointers from\n> >> > FdwRoutine in fdw_part structure, So I think we can avoid calling\n> >> > GetFdwRoutineByServerId() in TopMemoryContext.\n> >> > So I have moved the MemoryContextSwitch to TopMemoryContext after the\n> >> > GetFdwRoutineByServerId() call.\n> >>\n> >> Agreed.\n> >>\n> >> >\n> >> >\n> >> > 2- If PrepareForeignTransaction functionality is not present in some FDW then\n> >> > during the registration process we should only set the XACT_FLAGS_FDWNOPREPARE\n> >> > transaction flag if the modified flag is also set for that server. 
As for the server that has\n> >> > not done any data modification within the transaction we do not do two-phase commit anyway.\n> >>\n> >> Agreed.\n> >>\n> >> >\n> >> > 3- I have moved the foreign_twophase_commit in sample file after\n> >> > max_foreign_transaction_resolvers because the default value of max_foreign_transaction_resolvers\n> >> > is 0 and enabling the foreign_twophase_commit produces an error with default\n> >> > configuration parameter positioning in postgresql.conf\n> >> > Also, foreign_twophase_commit configuration was missing the comments\n> >> > about allowed values in the sample config file.\n> >>\n> >> Sounds good. Agreed.\n> >>\n> >> >\n> >> > 4- Setting ForeignTwophaseCommitIsRequired in is_foreign_twophase_commit_required()\n> >> > function does not seem to be the correct place. The reason being, even when\n> >> > is_foreign_twophase_commit_required() returns true after setting ForeignTwophaseCommitIsRequired\n> >> > to true, we could still end up not using the two-phase commit in the case when some server does\n> >> > not support two-phase commit and foreign_twophase_commit is set to FOREIGN_TWOPHASE_COMMIT_PREFER\n> >> > mode. So I have moved the ForeignTwophaseCommitIsRequired assignment to PreCommit_FdwXacts()\n> >> > function after doing the prepare transaction.\n> >>\n> >> Agreed.\n> >>\n> >> >\n> >> > 6- In prefer mode, we commit the transaction in single-phase if the server does not support\n> >> > the two-phase commit. But instead of doing the single-phase commit right away,\n> >> > IMHO the better way is to wait until all the two-phase transactions are successfully prepared\n> >> > on servers that support the two-phase. 
Since an error during a \"PREPARE\" stage would\n> >> > rollback the transaction and in that case, we would end up with committed transactions on\n> >> > the server that lacks the support of the two-phase commit.\n> >>\n> >> When an error occurred before the local commit, a 2pc-unsupported\n> >> server could be rolled back or committed depending on the error\n> >> timing. On the other hand all 2pc-supported servers are always rolled\n> >> back when an error occurred before the local commit. Therefore even if\n> >> we change the order of COMMIT and PREPARE it is still possible that we\n> >> will end up committing the part of 2pc-unsupported servers while\n> >> rolling back others including 2pc-supported servers.\n> >>\n> >> I guess the motivation of your change is that since errors are likely\n> >> to happen during executing PREPARE on foreign servers, we can minimize\n> >> the possibility of rolling back 2pc-unsupported servers by deferring\n> >> the commit of 2pc-unsupported server as much as possible. Is that\n> >> right?\n> >\n> >\n> > Yes, that is correct. The idea of doing the COMMIT on NON-2pc-supported servers\n> > after all the PREPAREs are successful is to minimize the chances of partial commits.\n> > And as you mentioned there will still be chances of getting a partial commit even with\n> > this approach but the probability of that would be less than what it is with the\n> > current sequence.\n> >\n> >\n> >>\n> >>\n> >> > So I have modified the flow a little bit and instead of doing a one-phase commit right away\n> >> > the servers that do not support a two-phase commit is added to another list and that list is\n> >> > processed after once we have successfully prepared all the transactions on two-phase supported\n> >> > foreign servers. Although this technique is also not bulletproof, still it is better than doing\n> >> > the one-phase commits before doing the PREPAREs.\n> >>\n> >> Hmm the current logic seems complex. 
Maybe we can just reverse the\n> >> order of COMMIT and PREPARE; do PREPARE on all 2pc-supported and\n> >> modified servers first and then do COMMIT on others?\n> >\n> >\n> > Agreed, seems reasonable.\n> >>\n> >>\n> >> >\n> >> > Also, I think we can improve on this one by throwing an error even in PREFER\n> >> > mode if there is more than one server that had data modified within the transaction\n> >> > and lacks the two-phase commit support.\n> >> >\n> >>\n> >> IIUC the concept of PREFER mode is that the transaction uses 2pc only\n> >> for 2pc-supported servers. IOW, even if the transaction modifies on a\n> >> 2pc-unsupported server we can proceed with the commit if in PREFER\n> >> mode, which cannot if in REQUIRED mode. What is the motivation of your\n> >> above idea?\n> >\n> >\n> > I was thinking that we could change the behavior of PREFER mode such that we only allow\n> > to COMMIT the transaction if the transaction needs to do a single-phase commit on one\n> > server only. That way we can ensure that we would never end up with partial commit.\n> >\n>\n> I think it's good to avoid a partial commit by using your idea but if\n> we want to avoid a partial commit we can use the 'required' mode,\n> which requires all participant servers to support 2pc. We throw an\n> error if participant servers include even one 2pc-unsupported server\n> is modified within the transaction. Of course if the participant node\n> is only one 2pc-unsupported server it can use 1pc even in the\n> 'required' mode.\n>\n> > One Idea in this regards would be to switch the local transaction to commit using 2pc\n> > if there is a total of only one foreign server that does not support the 2pc in the transaction,\n> > ensuring that 1-pc commit servers should always be less than or equal to 1. and if there are more\n> > than one foreign server requires 1-pc then we just throw an error.\n>\n> I might be missing your point but I suppose this idea is to do\n> something like the following?\n>\n> 1. 
prepare the local transaction\n> 2. commit the foreign transaction on 2pc-unsupported server\n> 3. commit the prepared local transaction\n>\n> >\n> > However having said that, I am not 100% sure if its a good or an acceptable Idea, and\n> > I am okay with continuing with the current behavior of PREFER mode if we put it in the\n> > document that this mode can cause a partial commit.\n>\n> There will three types of servers: (a) a server doesn't support any\n> transaction API, (b) a server supports only commit and rollback API\n> and (c) a server supports all APIs (commit, rollback and prepare).\n> Currently postgres transaction manager manages only server-(b) and\n> server-(c), adds them to FdwXactParticipants. I'm considering changing\n> the code so that it adds also server-(a) to FdwXactParticipants, in\n> order to track the number of server-(a) involved in the transaction.\n> But it doesn't insert FdwXact entry for it, and manage transactions on\n> these servers.\n>\n> The reason is this; if we want to have the 'required' mode strictly\n> require all participant servers to support 2pc, we should use 2pc when\n> (# of server-(a) + # of server-(b) + # of server-(c)) >= 2. But since\n> currently we just track the modification on a server-(a) by a flag we\n> cannot handle the case where two server-(a) are modified in the\n> transaction. On the other hand, if we don't consider server-(a) the\n> transaction could end up with a partial commit when a server-(a)\n> participates in the transaction. Therefore I'm thinking of the above\n> change so that the transaction manager can ensure that a partial\n> commit doesn't happen in the 'required' mode. What do you think?\n>\n> >\n> >>\n> >> > 7- Added a pfree() and list_free_deep() in PreCommit_FdwXacts() to reclaim the\n> >> > memory if fdw_part is removed from the list\n> >>\n> >> I think at the end of the transaction we free entries of\n> >> FdwXactParticipants list and set FdwXactParticipants to NIL. 
Why do we\n> >> need to do that in PreCommit_FdwXacts()?\n> >\n> >\n> > Correct me if I am wrong, The fdw_part structures are created in TopMemoryContext\n> > and if that fdw_part structure is removed from the list at pre_commit stage\n> > (because we did 1-PC COMMIT on it) then it would leak memory.\n>\n> The fdw_part structures are created in TopTransactionContext so these\n> are freed at the end of the transaction.\n>\n> >\n> >>\n> >> >\n> >> > 8- The function FdwXactWaitToBeResolved() was bailing out as soon as it finds\n> >> > (FdwXactParticipants == NIL). The problem with that was in the case of\n> >> > \"COMMIT/ROLLBACK PREPARED\" we always get FdwXactParticipants = NIL and\n> >> > effectively the foreign prepared transactions(if any) associated with locally\n> >> > prepared transactions were never getting resolved automatically.\n> >> >\n> >> >\n> >> > postgres=# BEGIN;\n> >> > BEGIN\n> >> > INSERT INTO test_local VALUES ( 2, 'TWO');\n> >> > INSERT 0 1\n> >> > INSERT INTO test_foreign_s1 VALUES ( 2, 'TWO');\n> >> > INSERT 0 1\n> >> > INSERT INTO test_foreign_s2 VALUES ( 2, 'TWO');\n> >> > INSERT 0 1\n> >> > postgres=*# PREPARE TRANSACTION 'local_prepared';\n> >> > PREPARE TRANSACTION\n> >> >\n> >> > postgres=# select * from pg_foreign_xacts ;\n> >> > dbid | xid | serverid | userid | status | in_doubt | identifier\n> >> > -------+-----+----------+--------+----------+----------+----------------------------\n> >> > 12929 | 515 | 16389 | 10 | prepared | f | fx_1339567411_515_16389_10\n> >> > 12929 | 515 | 16391 | 10 | prepared | f | fx_1963224020_515_16391_10\n> >> > (2 rows)\n> >> >\n> >> > -- Now commit the prepared transaction\n> >> >\n> >> > postgres=# COMMIT PREPARED 'local_prepared';\n> >> >\n> >> > COMMIT PREPARED\n> >> >\n> >> > --Foreign prepared transactions associated with 'local_prepared' not resolved\n> >> >\n> >> > postgres=#\n> >> >\n> >> > postgres=# select * from pg_foreign_xacts ;\n> >> > dbid | xid | serverid | userid | status | in_doubt | 
identifier\n> >> > -------+-----+----------+--------+----------+----------+----------------------------\n> >> > 12929 | 515 | 16389 | 10 | prepared | f | fx_1339567411_515_16389_10\n> >> > 12929 | 515 | 16391 | 10 | prepared | f | fx_1963224020_515_16391_10\n> >> > (2 rows)\n> >> >\n> >> >\n> >> > So to fix this in case of the two-phase transaction, the function checks the existence\n> >> > of associated foreign prepared transactions before bailing out.\n> >> >\n> >>\n> >> Good catch. But looking at your change, we should not accept the case\n> >> where FdwXactParticipants == NULL but TwoPhaseExists(wait_xid) ==\n> >> false.\n> >>\n> >> if (FdwXactParticipants == NIL)\n> >> {\n> >> /*\n> >> * If we are here because of COMMIT/ROLLBACK PREPARED then the\n> >> * FdwXactParticipants list would be empty. So we need to\n> >> * see if there are any foreign prepared transactions exists\n> >> * for this prepared transaction\n> >> */\n> >> if (TwoPhaseExists(wait_xid))\n> >> {\n> >> List *foreign_trans = NIL;\n> >>\n> >> foreign_trans = get_fdwxacts(MyDatabaseId,\n> >> wait_xid, InvalidOid, InvalidOid,\n> >> false, false, true);\n> >>\n> >> if (foreign_trans == NIL)\n> >> return;\n> >> list_free(foreign_trans);\n> >> }\n> >> }\n> >>\n> >\n> > Sorry my bad, its a mistake on my part. 
we should just return from the function when\n> > FdwXactParticipants == NULL but TwoPhaseExists(wait_xid) == false.\n> >\n> > if (TwoPhaseExists(wait_xid))\n> > {\n> > List *foreign_trans = NIL;\n> > foreign_trans = get_fdwxacts(MyDatabaseId, wait_xid, InvalidOid, InvalidOid,\n> > false, false, true);\n> >\n> > if (foreign_trans == NIL)\n> > return;\n> > list_free(foreign_trans);\n> > }\n> > else\n> > return;\n> >\n> >>\n> >> > 9- In function XlogReadFdwXactData() XLogBeginRead call was missing before XLogReadRecord()\n> >> > that was causing the crash during recovery.\n> >>\n> >> Agreed.\n> >>\n> >> >\n> >> > 10- incorporated set_ps_display() signature change.\n> >>\n> >> Thanks.\n> >>\n> >> Regarding other changes you did in v19 patch, I have some comments:\n> >>\n> >> 1.\n> >> + ereport(LOG,\n> >> + (errmsg(\"trying to %s the foreign transaction\n> >> associated with transaction %u on server %u\",\n> >> + fdwxact->status ==\n> >> FDWXACT_STATUS_COMMITTING?\"COMMIT\":\"ABORT\",\n> >> + fdwxact->local_xid,\n> >> fdwxact->serverid)));\n> >> +\n> >>\n> >> Why do we need to emit LOG message in pg_resolve_foreign_xact() SQL function?\n> >\n> >\n> > That change was not intended to get into the patch file. 
I had done it during testing to\n> > quickly get info on which way the transaction is going to be resolved.\n> >\n> >>\n> >> 2.\n> >> diff --git a/src/bin/pg_waldump/fdwxactdesc.c b/src/bin/pg_waldump/fdwxactdesc.c\n> >> deleted file mode 120000\n> >> index ce8c21880c..0000000000\n> >> --- a/src/bin/pg_waldump/fdwxactdesc.c\n> >> +++ /dev/null\n> >> @@ -1 +0,0 @@\n> >> -../../../src/backend/access/rmgrdesc/fdwxactdesc.c\n> >> \\ No newline at end of file\n> >> diff --git a/src/bin/pg_waldump/fdwxactdesc.c b/src/bin/pg_waldump/fdwxactdesc.c\n> >> new file mode 100644\n> >> index 0000000000..ce8c21880c\n> >> --- /dev/null\n> >> +++ b/src/bin/pg_waldump/fdwxactdesc.c\n> >> @@ -0,0 +1 @@\n> >> +../../../src/backend/access/rmgrdesc/fdwxactdesc.c\n> >>\n> >> We need to remove src/bin/pg_waldump/fdwxactdesc.c from the patch.\n> >\n> >\n> > Again sorry! that was an oversight on my part.\n> >\n> >>\n> >> 3.\n> >> --- a/doc/src/sgml/monitoring.sgml\n> >> +++ b/doc/src/sgml/monitoring.sgml\n> >> @@ -1526,14 +1526,14 @@ postgres 27093 0.0 0.0 30096 2752 ?\n> >> Ss 11:34 0:00 postgres: ser\n> >> <entry><literal>SafeSnapshot</literal></entry>\n> >> <entry>Waiting for a snapshot for a <literal>READ ONLY\n> >> DEFERRABLE</literal> transaction.</entry>\n> >> </row>\n> >> - <row>\n> >> - <entry><literal>SyncRep</literal></entry>\n> >> - <entry>Waiting for confirmation from remote server during\n> >> synchronous replication.</entry>\n> >> - </row>\n> >> <row>\n> >> <entry><literal>FdwXactResolution</literal></entry>\n> >> <entry>Waiting for all foreign transaction participants to\n> >> be resolved during atomic commit among foreign servers.</entry>\n> >> </row>\n> >> + <row>\n> >> + <entry><literal>SyncRep</literal></entry>\n> >> + <entry>Waiting for confirmation from remote server during\n> >> synchronous replication.</entry>\n> >> + </row>\n> >> <row>\n> >> <entry morerows=\"4\"><literal>Timeout</literal></entry>\n> >> <entry><literal>BaseBackupThrottle</literal></entry>\n> >>\n> 
>> We need to move the entry of FdwXactResolution to right before\n> >> Hash/Batch/Allocating for alphabetical order.\n> >\n> >\n> > Agreed!\n> >>\n> >>\n> >> I've incorporated your changes I agreed with to my local branch and\n> >> will incorporate other changes after discussion. I'll also do more\n> >> test and self-review and will submit the latest version patch.\n> >>\n> >\n> > Meanwhile, I found a couple of more small issues, One is the break statement missing\n> > i n pgstat_get_wait_ipc() and secondly fdwxact_relaunch_resolvers()\n> > could return un-initialized value.\n> > I am attaching a small patch for these changes that can be applied on top of existing\n> > patches.\n>\n> Thank you for the patch!\n>\n> I'm updating the patches because current behavior in error case would\n> not be good. For example, when an error occurs in the prepare phase,\n> prepared transactions are left as in-doubt transaction. And these\n> transactions are not handled by the resolver process. That means that\n> a user could need to resolve these transactions manually every abort\n> time, which is not good. In abort case, I think that prepared\n> transactions can be resolved by the backend itself, rather than\n> leaving them for the resolver. I'll submit the updated patch.\n>\n\nI've attached the latest version patch set which includes some changes\nfrom the previous version:\n\n* I've added regression tests that test all types of FDW\nimplementations. There are three types of FDW: FDW doesn't support any\ntransaction APIs, FDW supports only commit and rollback APIs and FDW\nsupports all (prepare, commit and rollback) APIs.\nsrc/test/module/test_fdwxact contains those FDW implementations for\ntests, and tests some cases where a transaction reads/writes data on\nvarious types of foreign servers.\n* Also test_fdwxact has TAP tests that check failure cases. The test\nFDW implementation has the ability to inject an error or panic into the\nprepare or commit phase. 
Using it, the TAP\ntests check whether distributed\ntransactions can be committed or rolled back even in failure cases.\n* When foreign_twophase_commit = 'required', the transaction commit\nfails if the transaction modified data on even one server not\nsupporting the prepare API. Previously, we used to ignore servers that\ndon't support any transaction API, but now we check them in order to strictly\nrequire all involved foreign servers to support all transaction APIs.\n* The transaction resolver process now resolves in-doubt transactions automatically.\n* Incorporated comments from Muhammad Usama.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 12 May 2020 15:44:54 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Tue, May 12, 2020 at 11:45 AM Masahiko Sawada <\nmasahiko.sawada@2ndquadrant.com> wrote:\n\n> On Thu, 30 Apr 2020 at 20:43, Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Tue, 28 Apr 2020 at 19:37, Muhammad Usama <m.usama@gmail.com> wrote:\n> > >\n> > >\n> > >\n> > > On Wed, Apr 8, 2020 at 11:16 AM Masahiko Sawada <\n> masahiko.sawada@2ndquadrant.com> wrote:\n> > >>\n> > >> On Fri, 27 Mar 2020 at 22:06, Muhammad Usama <m.usama@gmail.com>\n> wrote:\n> > >> >\n> > >> > Hi Sawada San,\n> > >> >\n> > >> > I have been further reviewing and testing the transaction involving\n> multiple server patches.\n> > >> > Overall the patches are working as expected bar a few important\n> exceptions.\n> > >> > So as discussed over the call I have fixed the issues I found\n> during the testing\n> > >> > and also rebased the patches with the current head of the master\n> branch.\n> > >> > So can you please have a look at the attached updated patches.\n> > >>\n> > >> Thank you for reviewing and updating the patch!\n> > >>\n> > >> >\n> > >> > Below is the list of changes I have made on top of V18 patches.\n> > >> >\n> > >> > 1- In register_fdwxact(), As we are just storing the callback\n> function pointers from\n> > >> > FdwRoutine in fdw_part structure, So I think we can avoid calling\n> > >> > GetFdwRoutineByServerId() in TopMemoryContext.\n> > >> > So I have moved the MemoryContextSwitch to TopMemoryContext after\n> the\n> > >> > GetFdwRoutineByServerId() call.\n> > >>\n> > >> Agreed.\n> > >>\n> > >> >\n> > >> >\n> > >> > 2- If PrepareForeignTransaction functionality is not present in\n> some FDW then\n> > >> > during the registration process we should only set the\n> XACT_FLAGS_FDWNOPREPARE\n> > >> > transaction flag if the modified flag is also set for that server.\n> As for the server that has\n> > >> > not done any data modification within the transaction we do not do\n> two-phase commit anyway.\n> > >>\n> > >> Agreed.\n> > >>\n> > >> 
>\n> > >> > 3- I have moved the foreign_twophase_commit in sample file after\n> > >> > max_foreign_transaction_resolvers because the default value of\n> max_foreign_transaction_resolvers\n> > >> > is 0 and enabling the foreign_twophase_commit produces an error\n> with default\n> > >> > configuration parameter positioning in postgresql.conf\n> > >> > Also, foreign_twophase_commit configuration was missing the comments\n> > >> > about allowed values in the sample config file.\n> > >>\n> > >> Sounds good. Agreed.\n> > >>\n> > >> >\n> > >> > 4- Setting ForeignTwophaseCommitIsRequired in\n> is_foreign_twophase_commit_required()\n> > >> > function does not seem to be the correct place. The reason being,\n> even when\n> > >> > is_foreign_twophase_commit_required() returns true after setting\n> ForeignTwophaseCommitIsRequired\n> > >> > to true, we could still end up not using the two-phase commit in\n> the case when some server does\n> > >> > not support two-phase commit and foreign_twophase_commit is set to\n> FOREIGN_TWOPHASE_COMMIT_PREFER\n> > >> > mode. So I have moved the ForeignTwophaseCommitIsRequired\n> assignment to PreCommit_FdwXacts()\n> > >> > function after doing the prepare transaction.\n> > >>\n> > >> Agreed.\n> > >>\n> > >> >\n> > >> > 6- In prefer mode, we commit the transaction in single-phase if the\n> server does not support\n> > >> > the two-phase commit. But instead of doing the single-phase commit\n> right away,\n> > >> > IMHO the better way is to wait until all the two-phase transactions\n> are successfully prepared\n> > >> > on servers that support the two-phase. Since an error during a\n> \"PREPARE\" stage would\n> > >> > rollback the transaction and in that case, we would end up with\n> committed transactions on\n> > >> > the server that lacks the support of the two-phase commit.\n> > >>\n> > >> When an error occurred before the local commit, a 2pc-unsupported\n> > >> server could be rolled back or committed depending on the error\n> > >> timing. 
On the other hand all 2pc-supported servers are always rolled\n> > >> back when an error occurred before the local commit. Therefore even if\n> > >> we change the order of COMMIT and PREPARE it is still possible that we\n> > >> will end up committing the part of 2pc-unsupported servers while\n> > >> rolling back others including 2pc-supported servers.\n> > >>\n> > >> I guess the motivation of your change is that since errors are likely\n> > >> to happen during executing PREPARE on foreign servers, we can minimize\n> > >> the possibility of rolling back 2pc-unsupported servers by deferring\n> > >> the commit of 2pc-unsupported server as much as possible. Is that\n> > >> right?\n> > >\n> > >\n> > > Yes, that is correct. The idea of doing the COMMIT on\n> NON-2pc-supported servers\n> > > after all the PREPAREs are successful is to minimize the chances of\n> partial commits.\n> > > And as you mentioned there will still be chances of getting a partial\n> commit even with\n> > > this approach but the probability of that would be less than what it\n> is with the\n> > > current sequence.\n> > >\n> > >\n> > >>\n> > >>\n> > >> > So I have modified the flow a little bit and instead of doing a\n> one-phase commit right away\n> > >> > the servers that do not support a two-phase commit is added to\n> another list and that list is\n> > >> > processed after once we have successfully prepared all the\n> transactions on two-phase supported\n> > >> > foreign servers. Although this technique is also not bulletproof,\n> still it is better than doing\n> > >> > the one-phase commits before doing the PREPAREs.\n> > >>\n> > >> Hmm the current logic seems complex. 
Maybe we can just reverse the\n> > >> order of COMMIT and PREPARE; do PREPARE on all 2pc-supported and\n> > >> modified servers first and then do COMMIT on others?\n> > >\n> > >\n> > > Agreed, seems reasonable.\n> > >>\n> > >>\n> > >> >\n> > >> > Also, I think we can improve on this one by throwing an error even\n> in PREFER\n> > >> > mode if there is more than one server that had data modified within\n> the transaction\n> > >> > and lacks the two-phase commit support.\n> > >> >\n> > >>\n> > >> IIUC the concept of PREFER mode is that the transaction uses 2pc only\n> > >> for 2pc-supported servers. IOW, even if the transaction modifies on a\n> > >> 2pc-unsupported server we can proceed with the commit if in PREFER\n> > >> mode, which cannot if in REQUIRED mode. What is the motivation of your\n> > >> above idea?\n> > >\n> > >\n> > > I was thinking that we could change the behavior of PREFER mode such\n> that we only allow\n> > > to COMMIT the transaction if the transaction needs to do a\n> single-phase commit on one\n> > > server only. That way we can ensure that we would never end up with\n> partial commit.\n> > >\n> >\n> > I think it's good to avoid a partial commit by using your idea but if\n> > we want to avoid a partial commit we can use the 'required' mode,\n> > which requires all participant servers to support 2pc. We throw an\n> > error if participant servers include even one 2pc-unsupported server\n> > is modified within the transaction. Of course if the participant node\n> > is only one 2pc-unsupported server it can use 1pc even in the\n> > 'required' mode.\n> >\n> > > One Idea in this regards would be to switch the local transaction to\n> commit using 2pc\n> > > if there is a total of only one foreign server that does not support\n> the 2pc in the transaction,\n> > > ensuring that 1-pc commit servers should always be less than or equal\n> to 1. 
and if there are more\n> > > than one foreign server requires 1-pc then we just throw an error.\n> >\n> > I might be missing your point but I suppose this idea is to do\n> > something like the following?\n> >\n> > 1. prepare the local transaction\n> > 2. commit the foreign transaction on 2pc-unsupported server\n> > 3. commit the prepared local transaction\n> >\n> > >\n> > > However having said that, I am not 100% sure if its a good or an\n> acceptable Idea, and\n> > > I am okay with continuing with the current behavior of PREFER mode if\n> we put it in the\n> > > document that this mode can cause a partial commit.\n> >\n> > There will three types of servers: (a) a server doesn't support any\n> > transaction API, (b) a server supports only commit and rollback API\n> > and (c) a server supports all APIs (commit, rollback and prepare).\n> > Currently postgres transaction manager manages only server-(b) and\n> > server-(c), adds them to FdwXactParticipants. I'm considering changing\n> > the code so that it adds also server-(a) to FdwXactParticipants, in\n> > order to track the number of server-(a) involved in the transaction.\n> > But it doesn't insert FdwXact entry for it, and manage transactions on\n> > these servers.\n> >\n> > The reason is this; if we want to have the 'required' mode strictly\n> > require all participant servers to support 2pc, we should use 2pc when\n> > (# of server-(a) + # of server-(b) + # of server-(c)) >= 2. But since\n> > currently we just track the modification on a server-(a) by a flag we\n> > cannot handle the case where two server-(a) are modified in the\n> > transaction. On the other hand, if we don't consider server-(a) the\n> > transaction could end up with a partial commit when a server-(a)\n> > participates in the transaction. Therefore I'm thinking of the above\n> > change so that the transaction manager can ensure that a partial\n> > commit doesn't happen in the 'required' mode. 
What do you think?\n> >\n> > >\n> > >>\n> > >> > 7- Added a pfree() and list_free_deep() in PreCommit_FdwXacts() to\n> reclaim the\n> > >> > memory if fdw_part is removed from the list\n> > >>\n> > >> I think at the end of the transaction we free entries of\n> > >> FdwXactParticipants list and set FdwXactParticipants to NIL. Why do we\n> > >> need to do that in PreCommit_FdwXacts()?\n> > >\n> > >\n> > > Correct me if I am wrong, The fdw_part structures are created in\n> TopMemoryContext\n> > > and if that fdw_part structure is removed from the list at pre_commit\n> stage\n> > > (because we did 1-PC COMMIT on it) then it would leak memory.\n> >\n> > The fdw_part structures are created in TopTransactionContext so these\n> > are freed at the end of the transaction.\n> >\n> > >\n> > >>\n> > >> >\n> > >> > 8- The function FdwXactWaitToBeResolved() was bailing out as soon\n> as it finds\n> > >> > (FdwXactParticipants == NIL). The problem with that was in the case\n> of\n> > >> > \"COMMIT/ROLLBACK PREPARED\" we always get FdwXactParticipants = NIL\n> and\n> > >> > effectively the foreign prepared transactions(if any) associated\n> with locally\n> > >> > prepared transactions were never getting resolved automatically.\n> > >> >\n> > >> >\n> > >> > postgres=# BEGIN;\n> > >> > BEGIN\n> > >> > INSERT INTO test_local VALUES ( 2, 'TWO');\n> > >> > INSERT 0 1\n> > >> > INSERT INTO test_foreign_s1 VALUES ( 2, 'TWO');\n> > >> > INSERT 0 1\n> > >> > INSERT INTO test_foreign_s2 VALUES ( 2, 'TWO');\n> > >> > INSERT 0 1\n> > >> > postgres=*# PREPARE TRANSACTION 'local_prepared';\n> > >> > PREPARE TRANSACTION\n> > >> >\n> > >> > postgres=# select * from pg_foreign_xacts ;\n> > >> > dbid | xid | serverid | userid | status | in_doubt |\n> identifier\n> > >> >\n> -------+-----+----------+--------+----------+----------+----------------------------\n> > >> > 12929 | 515 | 16389 | 10 | prepared | f |\n> fx_1339567411_515_16389_10\n> > >> > 12929 | 515 | 16391 | 10 | prepared | f |\n> 
fx_1963224020_515_16391_10\n> > >> > (2 rows)\n> > >> >\n> > >> > -- Now commit the prepared transaction\n> > >> >\n> > >> > postgres=# COMMIT PREPARED 'local_prepared';\n> > >> >\n> > >> > COMMIT PREPARED\n> > >> >\n> > >> > --Foreign prepared transactions associated with 'local_prepared'\n> not resolved\n> > >> >\n> > >> > postgres=#\n> > >> >\n> > >> > postgres=# select * from pg_foreign_xacts ;\n> > >> > dbid | xid | serverid | userid | status | in_doubt |\n> identifier\n> > >> >\n> -------+-----+----------+--------+----------+----------+----------------------------\n> > >> > 12929 | 515 | 16389 | 10 | prepared | f |\n> fx_1339567411_515_16389_10\n> > >> > 12929 | 515 | 16391 | 10 | prepared | f |\n> fx_1963224020_515_16391_10\n> > >> > (2 rows)\n> > >> >\n> > >> >\n> > >> > So to fix this in case of the two-phase transaction, the function\n> checks the existence\n> > >> > of associated foreign prepared transactions before bailing out.\n> > >> >\n> > >>\n> > >> Good catch. But looking at your change, we should not accept the case\n> > >> where FdwXactParticipants == NULL but TwoPhaseExists(wait_xid) ==\n> > >> false.\n> > >>\n> > >> if (FdwXactParticipants == NIL)\n> > >> {\n> > >> /*\n> > >> * If we are here because of COMMIT/ROLLBACK PREPARED\n> then the\n> > >> * FdwXactParticipants list would be empty. So we need\n> to\n> > >> * see if there are any foreign prepared transactions\n> exists\n> > >> * for this prepared transaction\n> > >> */\n> > >> if (TwoPhaseExists(wait_xid))\n> > >> {\n> > >> List *foreign_trans = NIL;\n> > >>\n> > >> foreign_trans = get_fdwxacts(MyDatabaseId,\n> > >> wait_xid, InvalidOid, InvalidOid,\n> > >> false, false, true);\n> > >>\n> > >> if (foreign_trans == NIL)\n> > >> return;\n> > >> list_free(foreign_trans);\n> > >> }\n> > >> }\n> > >>\n> > >\n> > > Sorry my bad, its a mistake on my part. 
we should just return from the\n> function when\n> > > FdwXactParticipants == NULL but TwoPhaseExists(wait_xid) == false.\n> > >\n> > > if (TwoPhaseExists(wait_xid))\n> > > {\n> > > List *foreign_trans = NIL;\n> > > foreign_trans = get_fdwxacts(MyDatabaseId, wait_xid,\n> InvalidOid, InvalidOid,\n> > > false, false, true);\n> > >\n> > > if (foreign_trans == NIL)\n> > > return;\n> > > list_free(foreign_trans);\n> > > }\n> > > else\n> > > return;\n> > >\n> > >>\n> > >> > 9- In function XlogReadFdwXactData() XLogBeginRead call was missing\n> before XLogReadRecord()\n> > >> > that was causing the crash during recovery.\n> > >>\n> > >> Agreed.\n> > >>\n> > >> >\n> > >> > 10- incorporated set_ps_display() signature change.\n> > >>\n> > >> Thanks.\n> > >>\n> > >> Regarding other changes you did in v19 patch, I have some comments:\n> > >>\n> > >> 1.\n> > >> + ereport(LOG,\n> > >> + (errmsg(\"trying to %s the foreign transaction\n> > >> associated with transaction %u on server %u\",\n> > >> + fdwxact->status ==\n> > >> FDWXACT_STATUS_COMMITTING?\"COMMIT\":\"ABORT\",\n> > >> + fdwxact->local_xid,\n> > >> fdwxact->serverid)));\n> > >> +\n> > >>\n> > >> Why do we need to emit LOG message in pg_resolve_foreign_xact() SQL\n> function?\n> > >\n> > >\n> > > That change was not intended to get into the patch file. 
I had done it\n> during testing to\n> > > quickly get info on which way the transaction is going to be resolved.\n> > >\n> > >>\n> > >> 2.\n> > >> diff --git a/src/bin/pg_waldump/fdwxactdesc.c\n> b/src/bin/pg_waldump/fdwxactdesc.c\n> > >> deleted file mode 120000\n> > >> index ce8c21880c..0000000000\n> > >> --- a/src/bin/pg_waldump/fdwxactdesc.c\n> > >> +++ /dev/null\n> > >> @@ -1 +0,0 @@\n> > >> -../../../src/backend/access/rmgrdesc/fdwxactdesc.c\n> > >> \\ No newline at end of file\n> > >> diff --git a/src/bin/pg_waldump/fdwxactdesc.c\n> b/src/bin/pg_waldump/fdwxactdesc.c\n> > >> new file mode 100644\n> > >> index 0000000000..ce8c21880c\n> > >> --- /dev/null\n> > >> +++ b/src/bin/pg_waldump/fdwxactdesc.c\n> > >> @@ -0,0 +1 @@\n> > >> +../../../src/backend/access/rmgrdesc/fdwxactdesc.c\n> > >>\n> > >> We need to remove src/bin/pg_waldump/fdwxactdesc.c from the patch.\n> > >\n> > >\n> > > Again sorry! that was an oversight on my part.\n> > >\n> > >>\n> > >> 3.\n> > >> --- a/doc/src/sgml/monitoring.sgml\n> > >> +++ b/doc/src/sgml/monitoring.sgml\n> > >> @@ -1526,14 +1526,14 @@ postgres 27093 0.0 0.0 30096 2752 ?\n> > >> Ss 11:34 0:00 postgres: ser\n> > >> <entry><literal>SafeSnapshot</literal></entry>\n> > >> <entry>Waiting for a snapshot for a <literal>READ ONLY\n> > >> DEFERRABLE</literal> transaction.</entry>\n> > >> </row>\n> > >> - <row>\n> > >> - <entry><literal>SyncRep</literal></entry>\n> > >> - <entry>Waiting for confirmation from remote server during\n> > >> synchronous replication.</entry>\n> > >> - </row>\n> > >> <row>\n> > >> <entry><literal>FdwXactResolution</literal></entry>\n> > >> <entry>Waiting for all foreign transaction participants to\n> > >> be resolved during atomic commit among foreign servers.</entry>\n> > >> </row>\n> > >> + <row>\n> > >> + <entry><literal>SyncRep</literal></entry>\n> > >> + <entry>Waiting for confirmation from remote server during\n> > >> synchronous replication.</entry>\n> > >> + </row>\n> > >> <row>\n> > >> <entry 
morerows=\"4\"><literal>Timeout</literal></entry>\n> > >> <entry><literal>BaseBackupThrottle</literal></entry>\n> > >>\n> > >> We need to move the entry of FdwXactResolution to right before\n> > >> Hash/Batch/Allocating for alphabetical order.\n> > >\n> > >\n> > > Agreed!\n> > >>\n> > >>\n> > >> I've incorporated your changes I agreed with to my local branch and\n> > >> will incorporate other changes after discussion. I'll also do more\n> > >> test and self-review and will submit the latest version patch.\n> > >>\n> > >\n> > > Meanwhile, I found a couple of more small issues, One is the break\n> statement missing\n> > > i n pgstat_get_wait_ipc() and secondly fdwxact_relaunch_resolvers()\n> > > could return un-initialized value.\n> > > I am attaching a small patch for these changes that can be applied on\n> top of existing\n> > > patches.\n> >\n> > Thank you for the patch!\n> >\n> > I'm updating the patches because current behavior in error case would\n> > not be good. For example, when an error occurs in the prepare phase,\n> > prepared transactions are left as in-doubt transaction. And these\n> > transactions are not handled by the resolver process. That means that\n> > a user could need to resolve these transactions manually every abort\n> > time, which is not good. In abort case, I think that prepared\n> > transactions can be resolved by the backend itself, rather than\n> > leaving them for the resolver. I'll submit the updated patch.\n> >\n>\n> I've attached the latest version patch set which includes some changes\n> from the previous version:\n>\n> * I've added regression tests that test all types of FDW\n> implementations. 
There are three types of FDW: FDW doesn't support any\n> transaction APIs, FDW supports only commit and rollback APIs and FDW\n> supports all (prepare, commit and rollback) APISs.\n> src/test/module/test_fdwxact contains those FDW implementations for\n> tests, and test some cases where a transaction reads/writes data on\n> various types of foreign servers.\n> * Also test_fdwxact has TAP tests that check failure cases. The test\n> FDW implementation has the ability to inject error or panic into\n> prepare or commit phase. Using it the TAP test checks if distributed\n> transactions can be committed or rolled back even in failure cases.\n> * When foreign_twophase_commit = 'required', the transaction commit\n> fails if the transaction modified data on even one server not\n> supporting prepare API. Previously, we used to ignore servers that\n> don't support any transaction API but we check them to strictly\n> require all involved foreign servers to support all transaction APIs.\n> * Transaction resolver process resolves in-doubt transactions\n> automatically.\n> * Incorporated comments from Muhammad Usama.\n>\n> Regards,\n>\n> --\n> Masahiko Sawada http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nHi Sawada,\n\nI have just done some review and testing of the patches and have\na couple of comments.\n\n1- IMHO the PREPARE TRANSACTION should always use 2PC even\nwhen the transaction has operated on a single foreign server regardless\nof foreign_twophase_commit setting, and throw an error otherwise when\n2PC is not available on any of the data-modified servers.\n\nFor example, consider the case\n\nBEGIN;\nINSERT INTO ft_2pc_1 VALUES(1);\nPREPARE TRANSACTION 'global_x1';\n\nHere since we are preparing the local transaction so we should also prepare\nthe transaction on the foreign server even if the transaction has modified\nonly\none foreign table.\n\nWhat do you think?\n\nAlso without this change, the above test case 
produces an assertion failure\nwith your patches.\n\n2- when deciding if the two-phase commit is required or not in\nFOREIGN_TWOPHASE_COMMIT_PREFER mode we should use\n2PC when we have at least one server capable of doing that.\n\ni.e\n\nFor FOREIGN_TWOPHASE_COMMIT_PREFER case in\ncheckForeignTwophaseCommitRequired() function I think\nthe condition should be\n\nneed_twophase_commit = (nserverstwophase >= 1);\ninstead of\nneed_twophase_commit = (nserverstwophase >= 2);\n\nI am attaching a patch that I have generated on top of your V20\npatches with these two modifications along with the related test case.\n\n\n\nBest regards!\n-- \n...\nMuhammad Usama\nHighgo Software (Canada/China/Pakistan)\nURL : http://www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC",
"msg_date": "Thu, 14 May 2020 23:08:32 +0500",
"msg_from": "Muhammad Usama <m.usama@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
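The commit ordering discussed in the message above (Usama's proposal, which Sawada simplifies to "do PREPARE on all 2pc-supported and modified servers first and then do COMMIT on others") can be sketched as a minimal standalone C program. All type and function names here are illustrative only — they are not the FdwXact structures or FDW callback APIs from the actual patch set:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for a foreign-transaction participant. */
typedef struct Participant
{
    bool supports_2pc;   /* does the FDW provide a prepare callback? */
    bool modified;       /* did the transaction write data on it? */
    bool prepared;       /* PREPARE TRANSACTION was issued */
    bool committed;      /* one-phase COMMIT was issued */
} Participant;

/*
 * Pre-commit sketch: PREPARE every modified 2pc-capable server first,
 * and only then issue one-phase COMMIT on the servers lacking prepare
 * support.  An error during any PREPARE still lets us roll everything
 * back, which shrinks the window for a partial commit compared with
 * committing the non-2pc servers up front.
 */
static void
precommit_participants(Participant *parts, size_t n)
{
    size_t i;

    for (i = 0; i < n; i++)      /* phase 1: PREPARE */
        if (parts[i].modified && parts[i].supports_2pc)
            parts[i].prepared = true;

    for (i = 0; i < n; i++)      /* phase 2: one-phase COMMIT */
        if (parts[i].modified && !parts[i].supports_2pc)
            parts[i].committed = true;
}
```

A read-only participant is left untouched in this sketch; as the thread notes, the real patch additionally has to track such servers in order to enforce the 'required' mode strictly.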
{
"msg_contents": "On Fri, 15 May 2020 at 03:08, Muhammad Usama <m.usama@gmail.com> wrote:\n>\n>\n> Hi Sawada,\n>\n> I have just done some review and testing of the patches and have\n> a couple of comments.\n\nThank you for reviewing!\n\n>\n> 1- IMHO the PREPARE TRANSACTION should always use 2PC even\n> when the transaction has operated on a single foreign server regardless\n> of foreign_twophase_commit setting, and throw an error otherwise when\n> 2PC is not available on any of the data-modified servers.\n>\n> For example, consider the case\n>\n> BEGIN;\n> INSERT INTO ft_2pc_1 VALUES(1);\n> PREPARE TRANSACTION 'global_x1';\n>\n> Here since we are preparing the local transaction so we should also prepare\n> the transaction on the foreign server even if the transaction has modified only\n> one foreign table.\n>\n> What do you think?\n\nGood catch and I agree with you. The transaction should fail if it\nopened a transaction on a 2pc-no-support server regardless of\nforeign_twophase_commit. And I think we should prepare a transaction\non a foreign server even if it didn't modify any data on that.\n\n>\n> Also without this change, the above test case produces an assertion failure\n> with your patches.\n>\n> 2- when deciding if the two-phase commit is required or not in\n> FOREIGN_TWOPHASE_COMMIT_PREFER mode we should use\n> 2PC when we have at least one server capable of doing that.\n>\n> i.e\n>\n> For FOREIGN_TWOPHASE_COMMIT_PREFER case in\n> checkForeignTwophaseCommitRequired() function I think\n> the condition should be\n>\n> need_twophase_commit = (nserverstwophase >= 1);\n> instead of\n> need_twophase_commit = (nserverstwophase >= 2);\n>\n\nHmm I might be missing your point but it seems to me that you want to\nuse two-phase commit even in the case where a transaction modified\ndata on only one server. 
Can't we commit distributed transaction\natomically even using one-phase commit in that case?\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 15 May 2020 11:19:29 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, May 15, 2020 at 7:20 AM Masahiko Sawada <\nmasahiko.sawada@2ndquadrant.com> wrote:\n\n> On Fri, 15 May 2020 at 03:08, Muhammad Usama <m.usama@gmail.com> wrote:\n> >\n> >\n> > Hi Sawada,\n> >\n> > I have just done some review and testing of the patches and have\n> > a couple of comments.\n>\n> Thank you for reviewing!\n>\n> >\n> > 1- IMHO the PREPARE TRANSACTION should always use 2PC even\n> > when the transaction has operated on a single foreign server regardless\n> > of foreign_twophase_commit setting, and throw an error otherwise when\n> > 2PC is not available on any of the data-modified servers.\n> >\n> > For example, consider the case\n> >\n> > BEGIN;\n> > INSERT INTO ft_2pc_1 VALUES(1);\n> > PREPARE TRANSACTION 'global_x1';\n> >\n> > Here since we are preparing the local transaction so we should also\n> prepare\n> > the transaction on the foreign server even if the transaction has\n> modified only\n> > one foreign table.\n> >\n> > What do you think?\n>\n> Good catch and I agree with you. The transaction should fail if it\n> opened a transaction on a 2pc-no-support server regardless of\n> foreign_twophase_commit. 
And I think we should prepare a transaction\n> on a foreign server even if it didn't modify any data on that.\n>\n> >\n> > Also without this change, the above test case produces an assertion\n> failure\n> > with your patches.\n> >\n> > 2- when deciding if the two-phase commit is required or not in\n> > FOREIGN_TWOPHASE_COMMIT_PREFER mode we should use\n> > 2PC when we have at least one server capable of doing that.\n> >\n> > i.e\n> >\n> > For FOREIGN_TWOPHASE_COMMIT_PREFER case in\n> > checkForeignTwophaseCommitRequired() function I think\n> > the condition should be\n> >\n> > need_twophase_commit = (nserverstwophase >= 1);\n> > instead of\n> > need_twophase_commit = (nserverstwophase >= 2);\n> >\n>\n> Hmm I might be missing your point but it seems to me that you want to\n> use two-phase commit even in the case where a transaction modified\n> data on only one server. Can't we commit distributed transaction\n> atomically even using one-phase commit in that case?\n>\n>\nI think you are confusing between nserverstwophase and nserverswritten.\n\nneed_twophase_commit = (nserverstwophase >= 1) would mean\nuse two-phase commit if at least one server exists in the list that is\ncapable of doing 2PC\n\nFor the case when the transaction modified data on only one server we\nalready exits the function indicating no two-phase required\n\n if (nserverswritten <= 1)\n return false;\n\n\n\n\n> Regards,\n>\n> --\n> Masahiko Sawada http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\n\nRegards,\n...\nMuhammad Usama\nHighgo Software (Canada/China/Pakistan)\nURL : http://www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC",
"msg_date": "Fri, 15 May 2020 09:25:50 +0500",
"msg_from": "Muhammad Usama <m.usama@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
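The condition the two are debating above can be made concrete with a small self-contained C sketch of the decision in checkForeignTwophaseCommitRequired(). The enum and the exact return values are assumptions for illustration; in the real patch the 'required' mode raises an error for an unpreparable writer rather than silently returning false:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for the foreign_twophase_commit GUC values. */
typedef enum
{
    FTC_DISABLED,
    FTC_PREFER,
    FTC_REQUIRED
} ForeignTwophaseCommitMode;

/*
 * Sketch of the check: nserverswritten counts all modified participants,
 * nserverstwophase counts the modified participants whose FDW can
 * PREPARE.  These names mirror the discussion, not the patch's code.
 */
static bool
need_twophase_commit(ForeignTwophaseCommitMode mode,
                     int nserverswritten, int nserverstwophase)
{
    /* A single modified server can always commit in one phase. */
    if (nserverswritten <= 1)
        return false;

    if (mode == FTC_DISABLED)
        return false;

    /* Usama's proposed 'prefer' condition: use 2PC whenever at least
     * one participant can prepare, shrinking the partial-commit window. */
    if (mode == FTC_PREFER)
        return nserverstwophase >= 1;

    /* FTC_REQUIRED: every writer must be able to prepare (the patch
     * raises an error when this does not hold). */
    return nserverstwophase == nserverswritten;
}
```

The early `nserverswritten <= 1` exit is the point Usama makes above: the single-writer case never reaches the `nserverstwophase >= 1` test, so the proposed condition cannot force 2PC onto a lone participant.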
{
"msg_contents": "On Fri, 15 May 2020 at 13:26, Muhammad Usama <m.usama@gmail.com> wrote:\n>\n>\n>\n> On Fri, May 15, 2020 at 7:20 AM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n>>\n>> On Fri, 15 May 2020 at 03:08, Muhammad Usama <m.usama@gmail.com> wrote:\n>> >\n>> >\n>> > Hi Sawada,\n>> >\n>> > I have just done some review and testing of the patches and have\n>> > a couple of comments.\n>>\n>> Thank you for reviewing!\n>>\n>> >\n>> > 1- IMHO the PREPARE TRANSACTION should always use 2PC even\n>> > when the transaction has operated on a single foreign server regardless\n>> > of foreign_twophase_commit setting, and throw an error otherwise when\n>> > 2PC is not available on any of the data-modified servers.\n>> >\n>> > For example, consider the case\n>> >\n>> > BEGIN;\n>> > INSERT INTO ft_2pc_1 VALUES(1);\n>> > PREPARE TRANSACTION 'global_x1';\n>> >\n>> > Here since we are preparing the local transaction so we should also prepare\n>> > the transaction on the foreign server even if the transaction has modified only\n>> > one foreign table.\n>> >\n>> > What do you think?\n>>\n>> Good catch and I agree with you. The transaction should fail if it\n>> opened a transaction on a 2pc-no-support server regardless of\n>> foreign_twophase_commit. 
And I think we should prepare a transaction\n>> on a foreign server even if it didn't modify any data on that.\n>>\n>> >\n>> > Also without this change, the above test case produces an assertion failure\n>> > with your patches.\n>> >\n>> > 2- when deciding if the two-phase commit is required or not in\n>> > FOREIGN_TWOPHASE_COMMIT_PREFER mode we should use\n>> > 2PC when we have at least one server capable of doing that.\n>> >\n>> > i.e\n>> >\n>> > For FOREIGN_TWOPHASE_COMMIT_PREFER case in\n>> > checkForeignTwophaseCommitRequired() function I think\n>> > the condition should be\n>> >\n>> > need_twophase_commit = (nserverstwophase >= 1);\n>> > instead of\n>> > need_twophase_commit = (nserverstwophase >= 2);\n>> >\n>>\n>> Hmm I might be missing your point but it seems to me that you want to\n>> use two-phase commit even in the case where a transaction modified\n>> data on only one server. Can't we commit distributed transaction\n>> atomically even using one-phase commit in that case?\n>>\n>\n> I think you are confusing between nserverstwophase and nserverswritten.\n>\n> need_twophase_commit = (nserverstwophase >= 1) would mean\n> use two-phase commit if at least one server exists in the list that is\n> capable of doing 2PC\n>\n> For the case when the transaction modified data on only one server we\n> already exits the function indicating no two-phase required\n>\n> if (nserverswritten <= 1)\n> return false;\n>\n\nThank you for your explanation. If the transaction modified two\nservers that don't support 2pc and one server that supports 2pc, I\nthink we don't want to use 2pc even in 'prefer' case. Because even if\nwe use 2pc in that case, it's still possible to have the atomic commit\nproblem. 
For example, if we failed to commit a transaction after\ncommitting other transactions on the server that doesn't support 2pc\nwe cannot rollback the already-committed transaction.\n\nOn the other hand, in 'prefer' case, if the transaction also modified\nthe local data, we need to use 2pc even if it modified data on only\none foreign server that supports 2pc. But the current code doesn't\nwork fine in that case for now. Probably we also need the following\nchange:\n\n@@ -540,7 +540,10 @@ checkForeignTwophaseCommitRequired(void)\n\n /* Did we modify the local non-temporary data? */\n if ((MyXactFlags & XACT_FLAGS_WROTENONTEMPREL) != 0)\n+ {\n nserverswritten++;\n+ nserverstwophase++;\n+ }\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 15 May 2020 13:58:59 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
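The one-hunk diff above, taken in isolation: when the local transaction wrote non-temporary data, the local server is counted both as a writer and as a 2PC-capable participant (it can always PREPARE TRANSACTION), so the 'prefer'-mode check ends up choosing two-phase commit for "local write plus one 2pc-capable foreign server". A hedged standalone sketch, with names made up for the example:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Sketch of the proposed hunk: fold the local server into the two
 * participant counters before the need_twophase_commit decision.
 * Mirrors the effect of incrementing both nserverswritten and
 * nserverstwophase when XACT_FLAGS_WROTENONTEMPREL is set; this is
 * not the patch's actual code.
 */
static void
count_local_participant(bool local_wrote_data,
                        int *nserverswritten, int *nserverstwophase)
{
    if (local_wrote_data)
    {
        (*nserverswritten)++;
        (*nserverstwophase)++;   /* the local server supports PREPARE */
    }
}
```

With this counting, a transaction that modifies local data and one 2pc-capable foreign server sees two writers, both preparable, and so clears the `nserverswritten <= 1` early exit that previously made 'prefer' mode skip 2PC.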
{
"msg_contents": "On Fri, May 15, 2020 at 9:59 AM Masahiko Sawada <\nmasahiko.sawada@2ndquadrant.com> wrote:\n\n> On Fri, 15 May 2020 at 13:26, Muhammad Usama <m.usama@gmail.com> wrote:\n> >\n> >\n> >\n> > On Fri, May 15, 2020 at 7:20 AM Masahiko Sawada <\n> masahiko.sawada@2ndquadrant.com> wrote:\n> >>\n> >> On Fri, 15 May 2020 at 03:08, Muhammad Usama <m.usama@gmail.com> wrote:\n> >> >\n> >> >\n> >> > Hi Sawada,\n> >> >\n> >> > I have just done some review and testing of the patches and have\n> >> > a couple of comments.\n> >>\n> >> Thank you for reviewing!\n> >>\n> >> >\n> >> > 1- IMHO the PREPARE TRANSACTION should always use 2PC even\n> >> > when the transaction has operated on a single foreign server\n> regardless\n> >> > of foreign_twophase_commit setting, and throw an error otherwise when\n> >> > 2PC is not available on any of the data-modified servers.\n> >> >\n> >> > For example, consider the case\n> >> >\n> >> > BEGIN;\n> >> > INSERT INTO ft_2pc_1 VALUES(1);\n> >> > PREPARE TRANSACTION 'global_x1';\n> >> >\n> >> > Here since we are preparing the local transaction so we should also\n> prepare\n> >> > the transaction on the foreign server even if the transaction has\n> modified only\n> >> > one foreign table.\n> >> >\n> >> > What do you think?\n> >>\n> >> Good catch and I agree with you. The transaction should fail if it\n> >> opened a transaction on a 2pc-no-support server regardless of\n> >> foreign_twophase_commit. 
And I think we should prepare a transaction\n> >> on a foreign server even if it didn't modify any data on that.\n> >>\n> >> >\n> >> > Also without this change, the above test case produces an assertion\n> failure\n> >> > with your patches.\n> >> >\n> >> > 2- when deciding if the two-phase commit is required or not in\n> >> > FOREIGN_TWOPHASE_COMMIT_PREFER mode we should use\n> >> > 2PC when we have at least one server capable of doing that.\n> >> >\n> >> > i.e\n> >> >\n> >> > For FOREIGN_TWOPHASE_COMMIT_PREFER case in\n> >> > checkForeignTwophaseCommitRequired() function I think\n> >> > the condition should be\n> >> >\n> >> > need_twophase_commit = (nserverstwophase >= 1);\n> >> > instead of\n> >> > need_twophase_commit = (nserverstwophase >= 2);\n> >> >\n> >>\n> >> Hmm I might be missing your point but it seems to me that you want to\n> >> use two-phase commit even in the case where a transaction modified\n> >> data on only one server. Can't we commit distributed transaction\n> >> atomically even using one-phase commit in that case?\n> >>\n> >\n> > I think you are confusing between nserverstwophase and nserverswritten.\n> >\n> > need_twophase_commit = (nserverstwophase >= 1) would mean\n> > use two-phase commit if at least one server exists in the list that is\n> > capable of doing 2PC\n> >\n> > For the case when the transaction modified data on only one server we\n> > already exits the function indicating no two-phase required\n> >\n> > if (nserverswritten <= 1)\n> > return false;\n> >\n>\n> Thank you for your explanation. If the transaction modified two\n> servers that don't' support 2pc and one server that supports 2pc I\n> think we don't want to use 2pc even in 'prefer' case. Because even if\n> we use 2pc in that case, it's still possible to have the atomic commit\n> problem. 
For example, if we failed to commit a transaction after\n> committing other transactions on the server that doesn't support 2pc\n> we cannot rollback the already-committed transaction.\n>\n\nYes, that is true, and I think the 'prefer' mode will always have a corner\ncase\nno matter what. But the thing is, we can reduce the probability of hitting\nan atomic commit problem by ensuring we use 2PC whenever possible.\n\nFor instance, take your example scenario where a transaction modified\ntwo servers that don't support 2PC and one server that supports it. Let us\nanalyze both scenarios.\n\nIf we use 2PC on the server that supports it then the probability of hitting\na problem would be 1/3 = 0.33, because there is only one corner case\nscenario in that case, which would be if we fail to commit on the third server.\nAs the first server (the 2PC-supported one) would be using prepared\ntransactions, there is no problem there. If the second server (non-2PC support)\nfails to commit, there is still no problem, as we can roll back the prepared\ntransaction on the first server. The only issue would happen when we fail\nto commit on the third server, because we have already committed\non the second server and there is no way to undo that.\n\n\nNow consider the other possibility: if we do not use 2PC in that\ncase (as you mentioned), then the probability of hitting the problem\nwould be 2/3 = 0.66, because now a commit failure on either the second or\nthird server will land us in an atomic-commit problem.\n\nSo, IMO using 2PC whenever available with 'prefer' mode\nshould be the way to go.\n\n\n> On the other hand, in 'prefer' case, if the transaction also modified\n> the local data, we need to use 2pc even if it modified data on only\n> one foreign server that supports 2pc. But the current code doesn't\n> work fine in that case for now. 
Probably we also need the following\n> change:\n>\n> @@ -540,7 +540,10 @@ checkForeignTwophaseCommitRequired(void)\n>\n> /* Did we modify the local non-temporary data? */\n> if ((MyXactFlags & XACT_FLAGS_WROTENONTEMPREL) != 0)\n> + {\n> nserverswritten++;\n> + nserverstwophase++;\n> + }\n>\n>\nI agree with the part that if the transaction also modifies the local data\nthen the 2PC should be used.\nThough the change you suggested [+ nserverstwophase++;]\nwould serve the purpose and deliver the same results, I think a\nbetter way would be to change the need_twophase_commit condition for\nprefer mode.\n\n\n * In 'prefer' case, we prepare transactions on only servers that\n * capable of two-phase commit.\n */\n- need_twophase_commit = (nserverstwophase >= 2);\n+ need_twophase_commit = (nserverstwophase >= 1);\n }\n\n\nThe reason I am saying that is: currently, we do not use 2PC on the local\nserver\nin case of distributed transactions, so we should also not count the local\nserver\nas one of the servers that would be performing the 2PC.\nAlso, I feel the change need_twophase_commit = (nserverstwophase >= 1)\nlooks more in line with the definition of our 'prefer' mode algorithm.\n\nDo you see an issue with this change?\n\n\n> Regards,\n>\n> --\n> Masahiko Sawada http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\n\nRegards,\n...\nMuhammad Usama\nHighgo Software (Canada/China/Pakistan)\nURL : http://www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC",
"msg_date": "Fri, 15 May 2020 15:05:52 +0500",
"msg_from": "Muhammad Usama <m.usama@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, 15 May 2020 at 19:06, Muhammad Usama <m.usama@gmail.com> wrote:\n>\n>\n>\n> On Fri, May 15, 2020 at 9:59 AM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n>>\n>> On Fri, 15 May 2020 at 13:26, Muhammad Usama <m.usama@gmail.com> wrote:\n>> >\n>> >\n>> >\n>> > On Fri, May 15, 2020 at 7:20 AM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n>> >>\n>> >> On Fri, 15 May 2020 at 03:08, Muhammad Usama <m.usama@gmail.com> wrote:\n>> >> >\n>> >> >\n>> >> > Hi Sawada,\n>> >> >\n>> >> > I have just done some review and testing of the patches and have\n>> >> > a couple of comments.\n>> >>\n>> >> Thank you for reviewing!\n>> >>\n>> >> >\n>> >> > 1- IMHO the PREPARE TRANSACTION should always use 2PC even\n>> >> > when the transaction has operated on a single foreign server regardless\n>> >> > of foreign_twophase_commit setting, and throw an error otherwise when\n>> >> > 2PC is not available on any of the data-modified servers.\n>> >> >\n>> >> > For example, consider the case\n>> >> >\n>> >> > BEGIN;\n>> >> > INSERT INTO ft_2pc_1 VALUES(1);\n>> >> > PREPARE TRANSACTION 'global_x1';\n>> >> >\n>> >> > Here since we are preparing the local transaction so we should also prepare\n>> >> > the transaction on the foreign server even if the transaction has modified only\n>> >> > one foreign table.\n>> >> >\n>> >> > What do you think?\n>> >>\n>> >> Good catch and I agree with you. The transaction should fail if it\n>> >> opened a transaction on a 2pc-no-support server regardless of\n>> >> foreign_twophase_commit. 
And I think we should prepare a transaction\n>> >> on a foreign server even if it didn't modify any data on that.\n>> >>\n>> >> >\n>> >> > Also without this change, the above test case produces an assertion failure\n>> >> > with your patches.\n>> >> >\n>> >> > 2- when deciding if the two-phase commit is required or not in\n>> >> > FOREIGN_TWOPHASE_COMMIT_PREFER mode we should use\n>> >> > 2PC when we have at least one server capable of doing that.\n>> >> >\n>> >> > i.e\n>> >> >\n>> >> > For FOREIGN_TWOPHASE_COMMIT_PREFER case in\n>> >> > checkForeignTwophaseCommitRequired() function I think\n>> >> > the condition should be\n>> >> >\n>> >> > need_twophase_commit = (nserverstwophase >= 1);\n>> >> > instead of\n>> >> > need_twophase_commit = (nserverstwophase >= 2);\n>> >> >\n>> >>\n>> >> Hmm I might be missing your point but it seems to me that you want to\n>> >> use two-phase commit even in the case where a transaction modified\n>> >> data on only one server. Can't we commit distributed transaction\n>> >> atomically even using one-phase commit in that case?\n>> >>\n>> >\n>> > I think you are confusing between nserverstwophase and nserverswritten.\n>> >\n>> > need_twophase_commit = (nserverstwophase >= 1) would mean\n>> > use two-phase commit if at least one server exists in the list that is\n>> > capable of doing 2PC\n>> >\n>> > For the case when the transaction modified data on only one server we\n>> > already exits the function indicating no two-phase required\n>> >\n>> > if (nserverswritten <= 1)\n>> > return false;\n>> >\n>>\n>> Thank you for your explanation. If the transaction modified two\n>> servers that don't' support 2pc and one server that supports 2pc I\n>> think we don't want to use 2pc even in 'prefer' case. Because even if\n>> we use 2pc in that case, it's still possible to have the atomic commit\n>> problem. 
For example, if we failed to commit a transaction after\n>> committing other transactions on the server that doesn't support 2pc\n>> we cannot rollback the already-committed transaction.\n>\n>\n> Yes, that is true, And I think the 'prefer' mode will always have a corner case\n> no matter what. But the thing is we can reduce the probability of hitting\n> an atomic commit problem by ensuring to use 2PC whenever possible.\n>\n> For instance as in your example scenario where a transaction modified\n> two servers that don't support 2PC and one server that supports it. let us\n> analyze both scenarios.\n>\n> If we use 2PC on the server that supports it then the probability of hitting\n> a problem would be 1/3 = 0.33. because there is only one corner case\n> scenario in that case. which would be if we fail to commit the third server\n> As the first server (2PC supported one) would be using prepared\n> transactions so no problem there. The second server (NON-2PC support)\n> if failed to commit then, still no problem as we can rollback the prepared\n> transaction on the first server. The only issue would happen when we fail\n> to commit on the third server because we have already committed\n> on the second server and there is no way to undo that.\n>\n>\n> Now consider the other possibility if we do not use the 2PC in that\n> case (as you mentioned), then the probability of hitting the problem\n> would be 2/3 = 0.66. because now commit failure on either second or\n> third server will land us in an atomic-commit-problem.\n>\n> So, INMO using the 2PC whenever available with 'prefer' mode\n> should be the way to go.\n\nMy understanding of 'prefer' mode is that even if a distributed\ntransaction modified data on several types of server we can ensure to\nkeep data consistent among only the local server and foreign servers\nthat support 2pc. It doesn't ensure anything for other servers that\ndon't support 2pc. 
Therefore we use 2pc if the transaction modifies\ndata on two or more servers that are either the local node or servers that\nsupport 2pc.\n\nI understand your argument that using 2pc in that case can decrease the possibility\nof hitting a problem, but one point we need to consider is that\n2pc has a very high cost. I think basically most users don’t want to use\n2pc if they can avoid it. Please note that it might not work as the\nuser expected because users cannot specify the commit order and\nparticular servers might be unstable. I'm not sure that users want to\npay high costs under such conditions. If we want to decrease that\npossibility by using 2pc as much as possible, I think it can be yet\nanother mode so that the user can choose the trade-off.\n\n>\n>>\n>> On the other hand, in 'prefer' case, if the transaction also modified\n>> the local data, we need to use 2pc even if it modified data on only\n>> one foreign server that supports 2pc. But the current code doesn't\n>> work fine in that case for now. Probably we also need the following\n>> change:\n>>\n>> @@ -540,7 +540,10 @@ checkForeignTwophaseCommitRequired(void)\n>>\n>> /* Did we modify the local non-temporary data? */\n>> if ((MyXactFlags & XACT_FLAGS_WROTENONTEMPREL) != 0)\n>> + {\n>> nserverswritten++;\n>> + nserverstwophase++;\n>> + }\n>>\n>\n> I agree with the part that if the transaction also modifies the local data\n> then the 2PC should be used.\n> Though the change you suggested [+ nserverstwophase++;]\n> would server the purpose and deliver the same results but I think a\n> better way would be to change need_twophase_commit condition for\n> prefer mode.\n>\n>\n> * In 'prefer' case, we prepare transactions on only servers that\n> * capable of two-phase commit.\n> */\n> - need_twophase_commit = (nserverstwophase >= 2);\n> + need_twophase_commit = (nserverstwophase >= 1);\n> }\n>\n>\n> The reason I am saying that is. 
Currently, we do not use 2PC on the local server\n> in case of distributed transactions, so we should also not count the local server\n> as one (servers that would be performing the 2PC).\n> Also I feel the change need_twophase_commit = (nserverstwophase >= 1)\n> looks more in line with the definition of our 'prefer' mode algorithm.\n>\n> Do you see an issue with this change?\n\nI think that with my change we will use 2pc in the case where a\ntransaction modified data on the local node and one server that\nsupports 2pc. But with your change, we will use 2pc in more cases, in\naddition to the case where a transaction modifies the local node and one\n2pc-supporting server. This would fit the definition of 'prefer' you\ndescribed, but it's still unclear to me that it's better to make\n'prefer' mode behave that way if we have three values: 'required', 'prefer'\nand 'disabled'.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 15 May 2020 23:51:55 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, May 15, 2020 at 7:52 PM Masahiko Sawada <\nmasahiko.sawada@2ndquadrant.com> wrote:\n\n> On Fri, 15 May 2020 at 19:06, Muhammad Usama <m.usama@gmail.com> wrote:\n> >\n> >\n> >\n> > On Fri, May 15, 2020 at 9:59 AM Masahiko Sawada <\n> masahiko.sawada@2ndquadrant.com> wrote:\n> >>\n> >> On Fri, 15 May 2020 at 13:26, Muhammad Usama <m.usama@gmail.com> wrote:\n> >> >\n> >> >\n> >> >\n> >> > On Fri, May 15, 2020 at 7:20 AM Masahiko Sawada <\n> masahiko.sawada@2ndquadrant.com> wrote:\n> >> >>\n> >> >> On Fri, 15 May 2020 at 03:08, Muhammad Usama <m.usama@gmail.com>\n> wrote:\n> >> >> >\n> >> >> >\n> >> >> > Hi Sawada,\n> >> >> >\n> >> >> > I have just done some review and testing of the patches and have\n> >> >> > a couple of comments.\n> >> >>\n> >> >> Thank you for reviewing!\n> >> >>\n> >> >> >\n> >> >> > 1- IMHO the PREPARE TRANSACTION should always use 2PC even\n> >> >> > when the transaction has operated on a single foreign server\n> regardless\n> >> >> > of foreign_twophase_commit setting, and throw an error otherwise\n> when\n> >> >> > 2PC is not available on any of the data-modified servers.\n> >> >> >\n> >> >> > For example, consider the case\n> >> >> >\n> >> >> > BEGIN;\n> >> >> > INSERT INTO ft_2pc_1 VALUES(1);\n> >> >> > PREPARE TRANSACTION 'global_x1';\n> >> >> >\n> >> >> > Here since we are preparing the local transaction so we should\n> also prepare\n> >> >> > the transaction on the foreign server even if the transaction has\n> modified only\n> >> >> > one foreign table.\n> >> >> >\n> >> >> > What do you think?\n> >> >>\n> >> >> Good catch and I agree with you. The transaction should fail if it\n> >> >> opened a transaction on a 2pc-no-support server regardless of\n> >> >> foreign_twophase_commit. 
And I think we should prepare a transaction\n> >> >> on a foreign server even if it didn't modify any data on that.\n> >> >>\n> >> >> >\n> >> >> > Also without this change, the above test case produces an\n> assertion failure\n> >> >> > with your patches.\n> >> >> >\n> >> >> > 2- when deciding if the two-phase commit is required or not in\n> >> >> > FOREIGN_TWOPHASE_COMMIT_PREFER mode we should use\n> >> >> > 2PC when we have at least one server capable of doing that.\n> >> >> >\n> >> >> > i.e\n> >> >> >\n> >> >> > For FOREIGN_TWOPHASE_COMMIT_PREFER case in\n> >> >> > checkForeignTwophaseCommitRequired() function I think\n> >> >> > the condition should be\n> >> >> >\n> >> >> > need_twophase_commit = (nserverstwophase >= 1);\n> >> >> > instead of\n> >> >> > need_twophase_commit = (nserverstwophase >= 2);\n> >> >> >\n> >> >>\n> >> >> Hmm I might be missing your point but it seems to me that you want to\n> >> >> use two-phase commit even in the case where a transaction modified\n> >> >> data on only one server. Can't we commit distributed transaction\n> >> >> atomically even using one-phase commit in that case?\n> >> >>\n> >> >\n> >> > I think you are confusing between nserverstwophase and\n> nserverswritten.\n> >> >\n> >> > need_twophase_commit = (nserverstwophase >= 1) would mean\n> >> > use two-phase commit if at least one server exists in the list that is\n> >> > capable of doing 2PC\n> >> >\n> >> > For the case when the transaction modified data on only one server we\n> >> > already exits the function indicating no two-phase required\n> >> >\n> >> > if (nserverswritten <= 1)\n> >> > return false;\n> >> >\n> >>\n> >> Thank you for your explanation. If the transaction modified two\n> >> servers that don't' support 2pc and one server that supports 2pc I\n> >> think we don't want to use 2pc even in 'prefer' case. Because even if\n> >> we use 2pc in that case, it's still possible to have the atomic commit\n> >> problem. 
For example, if we failed to commit a transaction after\n> >> committing other transactions on the server that doesn't support 2pc\n> >> we cannot rollback the already-committed transaction.\n> >\n> >\n> > Yes, that is true, And I think the 'prefer' mode will always have a\n> corner case\n> > no matter what. But the thing is we can reduce the probability of hitting\n> > an atomic commit problem by ensuring to use 2PC whenever possible.\n> >\n> > For instance as in your example scenario where a transaction modified\n> > two servers that don't support 2PC and one server that supports it. let\n> us\n> > analyze both scenarios.\n> >\n> > If we use 2PC on the server that supports it then the probability of\n> hitting\n> > a problem would be 1/3 = 0.33. because there is only one corner case\n> > scenario in that case. which would be if we fail to commit the third\n> server\n> > As the first server (2PC supported one) would be using prepared\n> > transactions so no problem there. The second server (NON-2PC support)\n> > if failed to commit then, still no problem as we can rollback the\n> prepared\n> > transaction on the first server. The only issue would happen when we fail\n> > to commit on the third server because we have already committed\n> > on the second server and there is no way to undo that.\n> >\n> >\n> > Now consider the other possibility if we do not use the 2PC in that\n> > case (as you mentioned), then the probability of hitting the problem\n> > would be 2/3 = 0.66. because now commit failure on either second or\n> > third server will land us in an atomic-commit-problem.\n> >\n> > So, INMO using the 2PC whenever available with 'prefer' mode\n> > should be the way to go.\n>\n> My understanding of 'prefer' mode is that even if a distributed\n> transaction modified data on several types of server we can ensure to\n> keep data consistent among only the local server and foreign servers\n> that support 2pc. 
It doesn't ensure anything for other servers that\n> don't support 2pc. Therefore we use 2pc if the transaction modifies\n> data on two or more servers that either the local node or servers that\n> support 2pc.\n>\n> I understand your argument that using 2pc in that case the possibility\n> of hitting a problem can decrease but one point we need to consider is\n> 2pc is very high cost. I think basically most users don’t want to use\n> 2pc as much as possible. Please note that it might not work as the\n> user expected because users cannot specify the commit order and\n> particular servers might be unstable. I'm not sure that users want to\n> pay high costs under such conditions. If we want to decrease that\n> possibility by using 2pc as much as possible, I think it can be yet\n> another mode so that the user can choose the trade-off.\n>\n> >\n> >>\n> >> On the other hand, in 'prefer' case, if the transaction also modified\n> >> the local data, we need to use 2pc even if it modified data on only\n> >> one foreign server that supports 2pc. But the current code doesn't\n> >> work fine in that case for now. Probably we also need the following\n> >> change:\n> >>\n> >> @@ -540,7 +540,10 @@ checkForeignTwophaseCommitRequired(void)\n> >>\n> >> /* Did we modify the local non-temporary data? 
*/\n> >> if ((MyXactFlags & XACT_FLAGS_WROTENONTEMPREL) != 0)\n> >> + {\n> >> nserverswritten++;\n> >> + nserverstwophase++;\n> >> + }\n> >>\n> >\n> > I agree with the part that if the transaction also modifies the local\n> data\n> > then the 2PC should be used.\n> > Though the change you suggested [+ nserverstwophase++;]\n> > would server the purpose and deliver the same results but I think a\n> > better way would be to change need_twophase_commit condition for\n> > prefer mode.\n> >\n> >\n> > * In 'prefer' case, we prepare transactions on only servers that\n> > * capable of two-phase commit.\n> > */\n> > - need_twophase_commit = (nserverstwophase >= 2);\n> > + need_twophase_commit = (nserverstwophase >= 1);\n> > }\n> >\n> >\n> > The reason I am saying that is. Currently, we do not use 2PC on the\n> local server\n> > in case of distributed transactions, so we should also not count the\n> local server\n> > as one (servers that would be performing the 2PC).\n> > Also I feel the change need_twophase_commit = (nserverstwophase >= 1)\n> > looks more in line with the definition of our 'prefer' mode algorithm.\n> >\n> > Do you see an issue with this change?\n>\n> I think that with my change we will use 2pc in the case where a\n> transaction modified data on the local node and one server that\n> supports 2pc. But with your change, we will use 2pc in more cases, in\n> addition to the case where a transaction modifies the local and one\n> 2pc-support server. 
This would fit the definition of 'prefer' you\n> described but it's still unclear to me that it's better to make\n> 'prefer' mode behave so if we have three values: 'required', 'prefer'\n> and 'disabled'.\n>\n>\nThanks for the detailed explanation; now I have a better understanding of\nthe\nreasons why we were going for a different solution to the problem.\nYou are right, my understanding of 'prefer' mode is that we must use 2PC as much\nas possible, and the reason for that was that the word 'prefer', as per my\nunderstanding,\nmeans \"it's more desirable/better to use than another or others\".\nSo the way I understood the FOREIGN_TWOPHASE_COMMIT_PREFER\nwas that we would use 2PC in the maximum possible number of cases, and the user\nwould already have the expectation that 2PC is more expensive than 1PC.\n\n\n\n> Regards,\n>\n> --\n> Masahiko Sawada http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\n\nRegards,\n...\nMuhammad Usama\nHighgo Software (Canada/China/Pakistan)\nURL : http://www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\n\nOn Fri, May 15, 2020 at 7:52 PM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:On Fri, 15 May 2020 at 19:06, Muhammad Usama <m.usama@gmail.com> wrote:\n>\n>\n>\n> On Fri, May 15, 2020 at 9:59 AM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n>>\n>> On Fri, 15 May 2020 at 13:26, Muhammad Usama <m.usama@gmail.com> wrote:\n>> >\n>> >\n>> >\n>> > On Fri, May 15, 2020 at 7:20 AM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n>> >>\n>> >> On Fri, 15 May 2020 at 03:08, Muhammad Usama <m.usama@gmail.com> wrote:\n>> >> >\n>> >> >\n>> >> > Hi Sawada,\n>> >> >\n>> >> > I have just done some review and testing of the patches and have\n>> >> > a couple of comments.\n>> >>\n>> >> Thank you for reviewing!\n>> >>\n>> >> >\n>> >> > 1- IMHO the PREPARE TRANSACTION should always use 2PC even\n>> >> > when the transaction has operated on a single foreign server regardless\n>> >> > of 
foreign_twophase_commit setting, and throw an error otherwise when\n>> >> > 2PC is not available on any of the data-modified servers.\n>> >> >\n>> >> > For example, consider the case\n>> >> >\n>> >> > BEGIN;\n>> >> > INSERT INTO ft_2pc_1 VALUES(1);\n>> >> > PREPARE TRANSACTION 'global_x1';\n>> >> >\n>> >> > Here since we are preparing the local transaction so we should also prepare\n>> >> > the transaction on the foreign server even if the transaction has modified only\n>> >> > one foreign table.\n>> >> >\n>> >> > What do you think?\n>> >>\n>> >> Good catch and I agree with you. The transaction should fail if it\n>> >> opened a transaction on a 2pc-no-support server regardless of\n>> >> foreign_twophase_commit. And I think we should prepare a transaction\n>> >> on a foreign server even if it didn't modify any data on that.\n>> >>\n>> >> >\n>> >> > Also without this change, the above test case produces an assertion failure\n>> >> > with your patches.\n>> >> >\n>> >> > 2- when deciding if the two-phase commit is required or not in\n>> >> > FOREIGN_TWOPHASE_COMMIT_PREFER mode we should use\n>> >> > 2PC when we have at least one server capable of doing that.\n>> >> >\n>> >> > i.e\n>> >> >\n>> >> > For FOREIGN_TWOPHASE_COMMIT_PREFER case in\n>> >> > checkForeignTwophaseCommitRequired() function I think\n>> >> > the condition should be\n>> >> >\n>> >> > need_twophase_commit = (nserverstwophase >= 1);\n>> >> > instead of\n>> >> > need_twophase_commit = (nserverstwophase >= 2);\n>> >> >\n>> >>\n>> >> Hmm I might be missing your point but it seems to me that you want to\n>> >> use two-phase commit even in the case where a transaction modified\n>> >> data on only one server. 
Can't we commit distributed transaction\n>> >> atomically even using one-phase commit in that case?\n>> >>\n>> >\n>> > I think you are confusing between nserverstwophase and nserverswritten.\n>> >\n>> > need_twophase_commit = (nserverstwophase >= 1) would mean\n>> > use two-phase commit if at least one server exists in the list that is\n>> > capable of doing 2PC\n>> >\n>> > For the case when the transaction modified data on only one server we\n>> > already exits the function indicating no two-phase required\n>> >\n>> > if (nserverswritten <= 1)\n>> > return false;\n>> >\n>>\n>> Thank you for your explanation. If the transaction modified two\n>> servers that don't' support 2pc and one server that supports 2pc I\n>> think we don't want to use 2pc even in 'prefer' case. Because even if\n>> we use 2pc in that case, it's still possible to have the atomic commit\n>> problem. For example, if we failed to commit a transaction after\n>> committing other transactions on the server that doesn't support 2pc\n>> we cannot rollback the already-committed transaction.\n>\n>\n> Yes, that is true, And I think the 'prefer' mode will always have a corner case\n> no matter what. But the thing is we can reduce the probability of hitting\n> an atomic commit problem by ensuring to use 2PC whenever possible.\n>\n> For instance as in your example scenario where a transaction modified\n> two servers that don't support 2PC and one server that supports it. let us\n> analyze both scenarios.\n>\n> If we use 2PC on the server that supports it then the probability of hitting\n> a problem would be 1/3 = 0.33. because there is only one corner case\n> scenario in that case. which would be if we fail to commit the third server\n> As the first server (2PC supported one) would be using prepared\n> transactions so no problem there. The second server (NON-2PC support)\n> if failed to commit then, still no problem as we can rollback the prepared\n> transaction on the first server. 
The only issue would happen when we fail\n> to commit on the third server because we have already committed\n> on the second server and there is no way to undo that.\n>\n>\n> Now consider the other possibility if we do not use the 2PC in that\n> case (as you mentioned), then the probability of hitting the problem\n> would be 2/3 = 0.66. because now commit failure on either second or\n> third server will land us in an atomic-commit-problem.\n>\n> So, INMO using the 2PC whenever available with 'prefer' mode\n> should be the way to go.\n\nMy understanding of 'prefer' mode is that even if a distributed\ntransaction modified data on several types of server we can ensure to\nkeep data consistent among only the local server and foreign servers\nthat support 2pc. It doesn't ensure anything for other servers that\ndon't support 2pc. Therefore we use 2pc if the transaction modifies\ndata on two or more servers that either the local node or servers that\nsupport 2pc.\n\nI understand your argument that using 2pc in that case the possibility\nof hitting a problem can decrease but one point we need to consider is\n2pc is very high cost. I think basically most users don’t want to use\n2pc as much as possible. Please note that it might not work as the\nuser expected because users cannot specify the commit order and\nparticular servers might be unstable. I'm not sure that users want to\npay high costs under such conditions. If we want to decrease that\npossibility by using 2pc as much as possible, I think it can be yet\nanother mode so that the user can choose the trade-off.\n\n>\n>>\n>> On the other hand, in 'prefer' case, if the transaction also modified\n>> the local data, we need to use 2pc even if it modified data on only\n>> one foreign server that supports 2pc. But the current code doesn't\n>> work fine in that case for now. 
Probably we also need the following\n>> change:\n>>\n>> @@ -540,7 +540,10 @@ checkForeignTwophaseCommitRequired(void)\n>>\n>> /* Did we modify the local non-temporary data? */\n>> if ((MyXactFlags & XACT_FLAGS_WROTENONTEMPREL) != 0)\n>> + {\n>> nserverswritten++;\n>> + nserverstwophase++;\n>> + }\n>>\n>\n> I agree with the part that if the transaction also modifies the local data\n> then the 2PC should be used.\n> Though the change you suggested [+ nserverstwophase++;]\n> would server the purpose and deliver the same results but I think a\n> better way would be to change need_twophase_commit condition for\n> prefer mode.\n>\n>\n> * In 'prefer' case, we prepare transactions on only servers that\n> * capable of two-phase commit.\n> */\n> - need_twophase_commit = (nserverstwophase >= 2);\n> + need_twophase_commit = (nserverstwophase >= 1);\n> }\n>\n>\n> The reason I am saying that is. Currently, we do not use 2PC on the local server\n> in case of distributed transactions, so we should also not count the local server\n> as one (servers that would be performing the 2PC).\n> Also I feel the change need_twophase_commit = (nserverstwophase >= 1)\n> looks more in line with the definition of our 'prefer' mode algorithm.\n>\n> Do you see an issue with this change?\n\nI think that with my change we will use 2pc in the case where a\ntransaction modified data on the local node and one server that\nsupports 2pc. But with your change, we will use 2pc in more cases, in\naddition to the case where a transaction modifies the local and one\n2pc-support server. 
This would fit the definition of 'prefer' you\ndescribed but it's still unclear to me that it's better to make\n'prefer' mode behave so if we have three values: 'required', 'prefer'\nand 'disabled'.\nThanks for the detailed explanation, now I have a better understanding of thereasons why we were going for a different solution to the problem.You are right my understanding of 'prefer' mode is we must use 2PC as muchas possible, and reason for that was the world prefer as per my understandingmeans \"it's more desirable/better to use than another or others\" So the way I understood the FOREIGN_TWOPHASE_COMMIT_PREFER\nwas that we would use 2PC in the maximum possible of cases, and the userwould already have the expectation that 2PC is more expensive than 1PC.\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\nRegards,...Muhammad UsamaHighgo Software (Canada/China/Pakistan) URL : http://www.highgo.ca ADDR: 10318 WHALLEY BLVD, Surrey, BC",
"msg_date": "Fri, 15 May 2020 20:54:03 +0500",
"msg_from": "Muhammad Usama <m.usama@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Sat, 16 May 2020 at 00:54, Muhammad Usama <m.usama@gmail.com> wrote:\n>\n>\n>\n> On Fri, May 15, 2020 at 7:52 PM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n>>\n>> On Fri, 15 May 2020 at 19:06, Muhammad Usama <m.usama@gmail.com> wrote:\n>> >\n>> >\n>> >\n>> > On Fri, May 15, 2020 at 9:59 AM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n>> >>\n>> >> On Fri, 15 May 2020 at 13:26, Muhammad Usama <m.usama@gmail.com> wrote:\n>> >> >\n>> >> >\n>> >> >\n>> >> > On Fri, May 15, 2020 at 7:20 AM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n>> >> >>\n>> >> >> On Fri, 15 May 2020 at 03:08, Muhammad Usama <m.usama@gmail.com> wrote:\n>> >> >> >\n>> >> >> >\n>> >> >> > Hi Sawada,\n>> >> >> >\n>> >> >> > I have just done some review and testing of the patches and have\n>> >> >> > a couple of comments.\n>> >> >>\n>> >> >> Thank you for reviewing!\n>> >> >>\n>> >> >> >\n>> >> >> > 1- IMHO the PREPARE TRANSACTION should always use 2PC even\n>> >> >> > when the transaction has operated on a single foreign server regardless\n>> >> >> > of foreign_twophase_commit setting, and throw an error otherwise when\n>> >> >> > 2PC is not available on any of the data-modified servers.\n>> >> >> >\n>> >> >> > For example, consider the case\n>> >> >> >\n>> >> >> > BEGIN;\n>> >> >> > INSERT INTO ft_2pc_1 VALUES(1);\n>> >> >> > PREPARE TRANSACTION 'global_x1';\n>> >> >> >\n>> >> >> > Here since we are preparing the local transaction so we should also prepare\n>> >> >> > the transaction on the foreign server even if the transaction has modified only\n>> >> >> > one foreign table.\n>> >> >> >\n>> >> >> > What do you think?\n>> >> >>\n>> >> >> Good catch and I agree with you. The transaction should fail if it\n>> >> >> opened a transaction on a 2pc-no-support server regardless of\n>> >> >> foreign_twophase_commit. 
And I think we should prepare a transaction\n>> >> >> on a foreign server even if it didn't modify any data on that.\n>> >> >>\n>> >> >> >\n>> >> >> > Also without this change, the above test case produces an assertion failure\n>> >> >> > with your patches.\n>> >> >> >\n>> >> >> > 2- when deciding if the two-phase commit is required or not in\n>> >> >> > FOREIGN_TWOPHASE_COMMIT_PREFER mode we should use\n>> >> >> > 2PC when we have at least one server capable of doing that.\n>> >> >> >\n>> >> >> > i.e\n>> >> >> >\n>> >> >> > For FOREIGN_TWOPHASE_COMMIT_PREFER case in\n>> >> >> > checkForeignTwophaseCommitRequired() function I think\n>> >> >> > the condition should be\n>> >> >> >\n>> >> >> > need_twophase_commit = (nserverstwophase >= 1);\n>> >> >> > instead of\n>> >> >> > need_twophase_commit = (nserverstwophase >= 2);\n>> >> >> >\n>> >> >>\n>> >> >> Hmm I might be missing your point but it seems to me that you want to\n>> >> >> use two-phase commit even in the case where a transaction modified\n>> >> >> data on only one server. Can't we commit distributed transaction\n>> >> >> atomically even using one-phase commit in that case?\n>> >> >>\n>> >> >\n>> >> > I think you are confusing between nserverstwophase and nserverswritten.\n>> >> >\n>> >> > need_twophase_commit = (nserverstwophase >= 1) would mean\n>> >> > use two-phase commit if at least one server exists in the list that is\n>> >> > capable of doing 2PC\n>> >> >\n>> >> > For the case when the transaction modified data on only one server we\n>> >> > already exits the function indicating no two-phase required\n>> >> >\n>> >> > if (nserverswritten <= 1)\n>> >> > return false;\n>> >> >\n>> >>\n>> >> Thank you for your explanation. If the transaction modified two\n>> >> servers that don't' support 2pc and one server that supports 2pc I\n>> >> think we don't want to use 2pc even in 'prefer' case. Because even if\n>> >> we use 2pc in that case, it's still possible to have the atomic commit\n>> >> problem. 
For example, if we failed to commit a transaction after\n>> >> committing other transactions on the server that doesn't support 2pc\n>> >> we cannot rollback the already-committed transaction.\n>> >\n>> >\n>> > Yes, that is true, And I think the 'prefer' mode will always have a corner case\n>> > no matter what. But the thing is we can reduce the probability of hitting\n>> > an atomic commit problem by ensuring to use 2PC whenever possible.\n>> >\n>> > For instance as in your example scenario where a transaction modified\n>> > two servers that don't support 2PC and one server that supports it. let us\n>> > analyze both scenarios.\n>> >\n>> > If we use 2PC on the server that supports it then the probability of hitting\n>> > a problem would be 1/3 = 0.33. because there is only one corner case\n>> > scenario in that case. which would be if we fail to commit the third server\n>> > As the first server (2PC supported one) would be using prepared\n>> > transactions so no problem there. The second server (NON-2PC support)\n>> > if failed to commit then, still no problem as we can rollback the prepared\n>> > transaction on the first server. The only issue would happen when we fail\n>> > to commit on the third server because we have already committed\n>> > on the second server and there is no way to undo that.\n>> >\n>> >\n>> > Now consider the other possibility if we do not use the 2PC in that\n>> > case (as you mentioned), then the probability of hitting the problem\n>> > would be 2/3 = 0.66. because now commit failure on either second or\n>> > third server will land us in an atomic-commit-problem.\n>> >\n>> > So, INMO using the 2PC whenever available with 'prefer' mode\n>> > should be the way to go.\n>>\n>> My understanding of 'prefer' mode is that even if a distributed\n>> transaction modified data on several types of server we can ensure to\n>> keep data consistent among only the local server and foreign servers\n>> that support 2pc. 
It doesn't ensure anything for other servers that\n>> don't support 2pc. Therefore we use 2pc if the transaction modifies\n>> data on two or more servers that either the local node or servers that\n>> support 2pc.\n>>\n>> I understand your argument that using 2pc in that case the possibility\n>> of hitting a problem can decrease but one point we need to consider is\n>> 2pc is very high cost. I think basically most users don’t want to use\n>> 2pc as much as possible. Please note that it might not work as the\n>> user expected because users cannot specify the commit order and\n>> particular servers might be unstable. I'm not sure that users want to\n>> pay high costs under such conditions. If we want to decrease that\n>> possibility by using 2pc as much as possible, I think it can be yet\n>> another mode so that the user can choose the trade-off.\n>>\n>> >\n>> >>\n>> >> On the other hand, in 'prefer' case, if the transaction also modified\n>> >> the local data, we need to use 2pc even if it modified data on only\n>> >> one foreign server that supports 2pc. But the current code doesn't\n>> >> work fine in that case for now. Probably we also need the following\n>> >> change:\n>> >>\n>> >> @@ -540,7 +540,10 @@ checkForeignTwophaseCommitRequired(void)\n>> >>\n>> >> /* Did we modify the local non-temporary data? 
*/\n>> >> if ((MyXactFlags & XACT_FLAGS_WROTENONTEMPREL) != 0)\n>> >> + {\n>> >> nserverswritten++;\n>> >> + nserverstwophase++;\n>> >> + }\n>> >>\n>> >\n>> > I agree with the part that if the transaction also modifies the local data\n>> > then the 2PC should be used.\n>> > Though the change you suggested [+ nserverstwophase++;]\n>> > would server the purpose and deliver the same results but I think a\n>> > better way would be to change need_twophase_commit condition for\n>> > prefer mode.\n>> >\n>> >\n>> > * In 'prefer' case, we prepare transactions on only servers that\n>> > * capable of two-phase commit.\n>> > */\n>> > - need_twophase_commit = (nserverstwophase >= 2);\n>> > + need_twophase_commit = (nserverstwophase >= 1);\n>> > }\n>> >\n>> >\n>> > The reason I am saying that is. Currently, we do not use 2PC on the local server\n>> > in case of distributed transactions, so we should also not count the local server\n>> > as one (servers that would be performing the 2PC).\n>> > Also I feel the change need_twophase_commit = (nserverstwophase >= 1)\n>> > looks more in line with the definition of our 'prefer' mode algorithm.\n>> >\n>> > Do you see an issue with this change?\n>>\n>> I think that with my change we will use 2pc in the case where a\n>> transaction modified data on the local node and one server that\n>> supports 2pc. But with your change, we will use 2pc in more cases, in\n>> addition to the case where a transaction modifies the local and one\n>> 2pc-support server. 
This would fit the definition of 'prefer' you\n>> described but it's still unclear to me that it's better to make\n>> 'prefer' mode behave so if we have three values: 'required', 'prefer'\n>> and 'disabled'.\n>>\n>\n> Thanks for the detailed explanation, now I have a better understanding of the\n> reasons why we were going for a different solution to the problem.\n> You are right my understanding of 'prefer' mode is we must use 2PC as much\n> as possible, and reason for that was the world prefer as per my understanding\n> means \"it's more desirable/better to use than another or others\"\n> So the way I understood the FOREIGN_TWOPHASE_COMMIT_PREFER\n> was that we would use 2PC in the maximum possible of cases, and the user\n> would already have the expectation that 2PC is more expensive than 1PC.\n>\n\nI think that the current three values are useful for users. The\n‘required’ mode is used when users want to ensure all writes involved\nwith the transaction are committed atomically. That being said, as\nsome FDW plugins might not support the prepare API, we cannot force\nusers to use this mode all the time when using atomic commit.\nTherefore ‘prefer’ mode would be useful for this case. Both modes use\n2pc only when it's required for atomic commit.\n\nSo what do you think of my idea of adding the behavior you proposed as\nanother new mode? As it’s better to keep the first version as simple as\npossible, it might not be added to the first version, but this\nbehavior might be useful in some cases.\n\nI've attached a new version patch that incorporates some bug fixes\nreported by Muhammad. Please review them.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 19 May 2020 16:02:12 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Tue, May 19, 2020 at 12:33 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> I think that the current three values are useful for users. The\n> ‘required’ mode is used when users want to ensure all writes involved\n> with the transaction are committed atomically. That being said, as\n> some FDW plugin might not support the prepare API we cannot force\n> users to use this mode all the time when using atomic commit.\n> Therefore ‘prefer’ mode would be useful for this case. Both modes use\n> 2pc only when it's required for atomic commit.\n>\n> So what do you think my idea that adding the behavior you proposed as\n> another new mode? As it’s better to keep the first version simple as\n> much as possible\n>\n\nIf the intention is to keep the first version simple, then why do we\nwant to support any mode other than 'required'? I think it will limit\nits usage to the cases where 2PC can be used only when all FDWs\ninvolved support the Prepare API, but if that helps to keep the design and\npatch simpler, then why not just do that for the first version and then\nextend it later? OTOH, if you think it will be really useful to keep\nother modes, we could also try to keep those in separate patches\nto facilitate the review and discussion of the core feature.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 3 Jun 2020 11:19:44 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Wed, 3 Jun 2020 at 14:50, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, May 19, 2020 at 12:33 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > I think that the current three values are useful for users. The\n> > ‘required’ mode is used when users want to ensure all writes involved\n> > with the transaction are committed atomically. That being said, as\n> > some FDW plugin might not support the prepare API we cannot force\n> > users to use this mode all the time when using atomic commit.\n> > Therefore ‘prefer’ mode would be useful for this case. Both modes use\n> > 2pc only when it's required for atomic commit.\n> >\n> > So what do you think my idea that adding the behavior you proposed as\n> > another new mode? As it’s better to keep the first version simple as\n> > much as possible\n> >\n>\n> If the intention is to keep the first version simple, then why do we\n> want to support any mode other than 'required'? I think it will limit\n> its usage for the cases where 2PC can be used only when all FDWs\n> involved support Prepare API but if that helps to keep the design and\n> patch simpler then why not just do that for the first version and then\n> extend it later. OTOH, if you think it will be really useful to keep\n> other modes, then also we could try to keep those in separate patches\n> to facilitate the review and discussion of the core feature.\n\n‘disabled’ is the fundamental mode. We also need 'disabled' mode,\notherwise existing FDW won't work. I was concerned that many FDW\nplugins don't implement FDW transaction APIs yet when users start\nusing this feature. But it seems to be a good idea to move 'prefer'\nmode to a separate patch while leaving 'required'. I'll do that in the\nnext version patch.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 3 Jun 2020 15:32:17 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Wed, Jun 3, 2020 at 12:02 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Wed, 3 Jun 2020 at 14:50, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > If the intention is to keep the first version simple, then why do we\n> > want to support any mode other than 'required'? I think it will limit\n> > its usage for the cases where 2PC can be used only when all FDWs\n> > involved support Prepare API but if that helps to keep the design and\n> > patch simpler then why not just do that for the first version and then\n> > extend it later. OTOH, if you think it will be really useful to keep\n> > other modes, then also we could try to keep those in separate patches\n> > to facilitate the review and discussion of the core feature.\n>\n> ‘disabled’ is the fundamental mode. We also need 'disabled' mode,\n> otherwise existing FDW won't work.\n>\n\nIIUC, if foreign_twophase_commit is 'disabled', we don't use a\ntwo-phase protocol to commit distributed transactions, right? So, do\nwe check this at the time of Prepare or Commit whether we need to use\na two-phase protocol? I think this should be checked at prepare time.\n\n+ <para>\n+ This parameter can be changed at any time; the behavior for any one\n+ transaction is determined by the setting in effect when it commits.\n+ </para>\n\nThis is written w.r.t foreign_twophase_commit. If one changes this\nbetween prepare and commit, will it have any impact?\n\n> I was concerned that many FDW\n> plugins don't implement FDW transaction APIs yet when users start\n> using this feature. But it seems to be a good idea to move 'prefer'\n> mode to a separate patch while leaving 'required'. I'll do that in the\n> next version patch.\n>\n\nOkay, thanks. 
Please, see if you can separate out the documentation\nfor that as well.\n\nFew other comments on v21-0003-Documentation-update:\n----------------------------------------------------\n1.\n+ <entry></entry>\n+ <entry>\n+ Numeric transaction identifier with that this foreign transaction\n+ associates\n+ </entry>\n\n/with that this/with which this\n\n2.\n+ <entry>\n+ The OID of the foreign server on that the foreign transaction\nis prepared\n+ </entry>\n\n/on that the/on which the\n\n3.\n+ <entry><structfield>status</structfield></entry>\n+ <entry><type>text</type></entry>\n+ <entry></entry>\n+ <entry>\n+ Status of foreign transaction. Possible values are:\n+ <itemizedlist>\n+ <listitem>\n+ <para>\n+ <literal>initial</literal> : Initial status.\n+ </para>\n\nWhat exactly \"Initial status\" means?\n\n4.\n+ <entry><structfield>in_doubt</structfield></entry>\n+ <entry><type>boolean</type></entry>\n+ <entry></entry>\n+ <entry>\n+ If <literal>true</literal> this foreign transaction is\nin-doubt status and\n+ needs to be resolved by calling <function>pg_resolve_fdwxact</function>\n+ function.\n+ </entry>\n\nIt would be better if you can add an additional sentence to say when\nand or how can foreign transactions reach in-doubt state.\n\n5.\nIf <literal>N</literal> local transactions each\n+ across <literal>K</literal> foreign server this value need to be set\n\nThis part of the sentence can be improved by saying something like:\n\"If a user expects N local transactions and each of those involves K\nforeign servers, this value..\".\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 4 Jun 2020 09:15:48 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Thu, 4 Jun 2020 at 12:46, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jun 3, 2020 at 12:02 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Wed, 3 Jun 2020 at 14:50, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > If the intention is to keep the first version simple, then why do we\n> > > want to support any mode other than 'required'? I think it will limit\n> > > its usage for the cases where 2PC can be used only when all FDWs\n> > > involved support Prepare API but if that helps to keep the design and\n> > > patch simpler then why not just do that for the first version and then\n> > > extend it later. OTOH, if you think it will be really useful to keep\n> > > other modes, then also we could try to keep those in separate patches\n> > > to facilitate the review and discussion of the core feature.\n> >\n> > ‘disabled’ is the fundamental mode.\n\nOops, I wanted to say 'required' is the fundamental mode.\n\n> > We also need 'disabled' mode,\n> > otherwise existing FDW won't work.\n> >\n>\n> IIUC, if foreign_twophase_commit is 'disabled', we don't use a\n> two-phase protocol to commit distributed transactions, right? So, do\n> we check this at the time of Prepare or Commit whether we need to use\n> a two-phase protocol? I think this should be checked at prepare time.\n\nWhen a client executes COMMIT on a distributed transaction, 2pc is\nautomatically and transparently used. In the ‘required’ case, all involved\n(and modified) foreign servers need to support 2pc. So if a\ndistributed transaction modifies data on a foreign server connected\nvia an existing FDW which doesn’t support 2pc, the transaction cannot\nproceed to commit; it fails at the pre-commit phase. 
So there should be two\nmodes: ‘disabled’ and ‘required’, and should be ‘disabled’ by default.\n\n>\n> + <para>\n> + This parameter can be changed at any time; the behavior for any one\n> + transaction is determined by the setting in effect when it commits.\n> + </para>\n>\n> This is written w.r.t foreign_twophase_commit. If one changes this\n> between prepare and commit, will it have any impact?\n\nSince the distributed transaction commit automatically uses 2pc when\nexecuting COMMIT, it's not possible to change foreign_twophase_commit\nbetween prepare and commit. So I'd like to explain the case where a\nuser executes PREPARE and then COMMIT PREPARED while changing\nforeign_twophase_commit.\n\nPREPARE can run only when foreign_twophase_commit is 'required' (or\n'prefer') and all foreign servers involved with the transaction\nsupport 2pc. We prepare all foreign transactions no matter what the\nnumber of servers and modified or not. If either\nforeign_twophase_commit is 'disabled' or the transaction modifies data\non a foreign server that doesn't support 2pc, it raises an error. At\nCOMMIT (or ROLLBACK) PREPARED, similarly foreign_twophase_commit needs\nto be set to 'required'. It raises an error if the distributed\ntransaction has a foreign transaction and foreign_twophase_commit is\n'disabled'.\n\n>\n> > I was concerned that many FDW\n> > plugins don't implement FDW transaction APIs yet when users start\n> > using this feature. But it seems to be a good idea to move 'prefer'\n> > mode to a separate patch while leaving 'required'. I'll do that in the\n> > next version patch.\n> >\n>\n> Okay, thanks. 
Please, see if you can separate out the documentation\n> for that as well.\n>\n> Few other comments on v21-0003-Documentation-update:\n> ----------------------------------------------------\n> 1.\n> + <entry></entry>\n> + <entry>\n> + Numeric transaction identifier with that this foreign transaction\n> + associates\n> + </entry>\n>\n> /with that this/with which this\n>\n> 2.\n> + <entry>\n> + The OID of the foreign server on that the foreign transaction\n> is prepared\n> + </entry>\n>\n> /on that the/on which the\n>\n> 3.\n> + <entry><structfield>status</structfield></entry>\n> + <entry><type>text</type></entry>\n> + <entry></entry>\n> + <entry>\n> + Status of foreign transaction. Possible values are:\n> + <itemizedlist>\n> + <listitem>\n> + <para>\n> + <literal>initial</literal> : Initial status.\n> + </para>\n>\n> What exactly \"Initial status\" means?\n\nThis part is out-of-date. Fixed.\n\n>\n> 4.\n> + <entry><structfield>in_doubt</structfield></entry>\n> + <entry><type>boolean</type></entry>\n> + <entry></entry>\n> + <entry>\n> + If <literal>true</literal> this foreign transaction is\n> in-doubt status and\n> + needs to be resolved by calling <function>pg_resolve_fdwxact</function>\n> + function.\n> + </entry>\n>\n> It would be better if you can add an additional sentence to say when\n> and or how can foreign transactions reach in-doubt state.\n>\n> 5.\n> If <literal>N</literal> local transactions each\n> + across <literal>K</literal> foreign server this value need to be set\n>\n> This part of the sentence can be improved by saying something like:\n> \"If a user expects N local transactions and each of those involves K\n> foreign servers, this value..\".\n\nThanks. I've incorporated all your comments.\n\nI've attached the new version patch set. 0006 is a separate patch\nwhich introduces 'prefer' mode to foreign_twophase_commit.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 5 Jun 2020 18:45:33 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, Jun 5, 2020 at 3:16 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 4 Jun 2020 at 12:46, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > + <para>\n> > + This parameter can be changed at any time; the behavior for any one\n> > + transaction is determined by the setting in effect when it commits.\n> > + </para>\n> >\n> > This is written w.r.t foreign_twophase_commit. If one changes this\n> > between prepare and commit, will it have any impact?\n>\n> Since the distributed transaction commit automatically uses 2pc when\n> executing COMMIT, it's not possible to change foreign_twophase_commit\n> between prepare and commit. So I'd like to explain the case where a\n> user executes PREPARE and then COMMIT PREPARED while changing\n> foreign_twophase_commit.\n>\n> PREPARE can run only when foreign_twophase_commit is 'required' (or\n> 'prefer') and all foreign servers involved with the transaction\n> support 2pc. We prepare all foreign transactions no matter what the\n> number of servers and modified or not. If either\n> foreign_twophase_commit is 'disabled' or the transaction modifies data\n> on a foreign server that doesn't support 2pc, it raises an error. At\n> COMMIT (or ROLLBACK) PREPARED, similarly foreign_twophase_commit needs\n> to be set to 'required'. It raises an error if the distributed\n> transaction has a foreign transaction and foreign_twophase_commit is\n> 'disabled'.\n>\n\nSo, IIUC, it will raise an error if foreign_twophase_commit is\n'disabled' (or one of the foreign server involved doesn't support 2PC)\nand the error can be raised both when user issues PREPARE or COMMIT\n(or ROLLBACK) PREPARED. If so, isn't it strange that we raise such an\nerror after PREPARE? 
What kind of use-case required this?\n\n>\n> >\n> > 4.\n> > + <entry><structfield>in_doubt</structfield></entry>\n> > + <entry><type>boolean</type></entry>\n> > + <entry></entry>\n> > + <entry>\n> > + If <literal>true</literal> this foreign transaction is\n> > in-doubt status and\n> > + needs to be resolved by calling <function>pg_resolve_fdwxact</function>\n> > + function.\n> > + </entry>\n> >\n> > It would be better if you can add an additional sentence to say when\n> > and or how can foreign transactions reach in-doubt state.\n> >\n\n+ If <literal>true</literal> this foreign transaction is in-doubt status.\n+ A foreign transaction becomes in-doubt status when user canceled the\n+ query during transaction commit or the server crashed during transaction\n+ commit.\n\nCan we reword the second sentence as: \"A foreign transaction can have\nthis status when the user has cancelled the statement or the server\ncrashes during transaction commit.\"? I have another question about\nthis field, why can't it be one of the status ('preparing',\n'prepared', 'committing', 'aborting', 'in-doubt') rather than having a\nseparate field? Also, isn't it more suitable to name 'status' field\nas 'state' because these appear to be more like different states of\ntransaction?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 11 Jun 2020 18:51:12 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Thu, 11 Jun 2020 at 22:21, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jun 5, 2020 at 3:16 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Thu, 4 Jun 2020 at 12:46, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > + <para>\n> > > + This parameter can be changed at any time; the behavior for any one\n> > > + transaction is determined by the setting in effect when it commits.\n> > > + </para>\n> > >\n> > > This is written w.r.t foreign_twophase_commit. If one changes this\n> > > between prepare and commit, will it have any impact?\n> >\n> > Since the distributed transaction commit automatically uses 2pc when\n> > executing COMMIT, it's not possible to change foreign_twophase_commit\n> > between prepare and commit. So I'd like to explain the case where a\n> > user executes PREPARE and then COMMIT PREPARED while changing\n> > foreign_twophase_commit.\n> >\n> > PREPARE can run only when foreign_twophase_commit is 'required' (or\n> > 'prefer') and all foreign servers involved with the transaction\n> > support 2pc. We prepare all foreign transactions no matter what the\n> > number of servers and modified or not. If either\n> > foreign_twophase_commit is 'disabled' or the transaction modifies data\n> > on a foreign server that doesn't support 2pc, it raises an error. At\n> > COMMIT (or ROLLBACK) PREPARED, similarly foreign_twophase_commit needs\n> > to be set to 'required'. It raises an error if the distributed\n> > transaction has a foreign transaction and foreign_twophase_commit is\n> > 'disabled'.\n> >\n>\n> So, IIUC, it will raise an error if foreign_twophase_commit is\n> 'disabled' (or one of the foreign server involved doesn't support 2PC)\n> and the error can be raised both when user issues PREPARE or COMMIT\n> (or ROLLBACK) PREPARED. If so, isn't it strange that we raise such an\n> error after PREPARE? 
What kind of use-case required this?\n>\n\nI don't have a concrete use-case, but the reason why it raises an error when a\nuser setting foreign_twophase_commit to 'disabled' executes COMMIT (or\nROLLBACK) PREPARED within the transaction involving at least one\nforeign server is that I wanted to make it behave in a way similar to the\nCOMMIT case. I mean, if a user executes just COMMIT, the distributed\ntransaction is committed in two phases but the value of\nforeign_twophase_commit is not changed during these two phases. So I\nwanted to require the user to set foreign_twophase_commit to ‘required’\nboth when executing PREPARE and executing COMMIT (or ROLLBACK)\nPREPARED. The implementation can also become simpler because we can assume\nthat foreign_twophase_commit is always enabled when a transaction\nrequires foreign transaction preparation and resolution.\n\n> >\n> > >\n> > > 4.\n> > > + <entry><structfield>in_doubt</structfield></entry>\n> > > + <entry><type>boolean</type></entry>\n> > > + <entry></entry>\n> > > + <entry>\n> > > + If <literal>true</literal> this foreign transaction is\n> > > in-doubt status and\n> > > + needs to be resolved by calling <function>pg_resolve_fdwxact</function>\n> > > + function.\n> > > + </entry>\n> > >\n> > > It would be better if you can add an additional sentence to say when\n> > > and or how can foreign transactions reach in-doubt state.\n> > >\n>\n> + If <literal>true</literal> this foreign transaction is in-doubt status.\n> + A foreign transaction becomes in-doubt status when user canceled the\n> + query during transaction commit or the server crashed during transaction\n> + commit.\n>\n> Can we reword the second sentence as: \"A foreign transaction can have\n> this status when the user has cancelled the statement or the server\n> crashes during transaction commit.\"?\n\nAgreed. 
Updated in my local branch.\n\n> I have another question about\n> this field, why can't it be one of the status ('preparing',\n> 'prepared', 'committing', 'aborting', 'in-doubt') rather than having a\n> separate field?\n\nBecause I'm using the in-doubt field also for checking whether the foreign\ntransaction entry can be resolved manually, i.e. via\npg_resolve_foreign_xact(). For instance, a foreign transaction with\nstatus = 'prepared' and in-doubt = 'true' can be resolved either by a\nforeign transaction resolver or by pg_resolve_foreign_xact(). When a user\nexecutes pg_resolve_foreign_xact() against the foreign transaction, it\nsets status = 'committing' (or 'rollbacking') by checking the transaction\nstatus in clog. The user might cancel pg_resolve_foreign_xact() during\nresolution. In this case, the foreign transaction still has status =\n'committing' and in-doubt = 'true'. Then if a foreign transaction\nresolver process processes the foreign transaction, it can commit it\nwithout looking at the clog.\n\n> Also, isn't it more suitable to name 'status' field\n> as 'state' because these appear to be more like different states of\n> transaction?\n\nAgreed.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 12 Jun 2020 11:28:51 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
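The separate in-doubt flag next to the state field, as described in the exchange above, can be modeled as a small state machine. The following Python sketch uses illustrative names only (the actual patch implements this in C inside the server); it shows why a cancelled manual resolution still leaves enough recorded state for a resolver to finish without a second clog lookup:

```python
from dataclasses import dataclass

@dataclass
class FdwXactEntry:
    """Toy model of one foreign-transaction entry, following the thread's
    terminology (a state plus a separate in_doubt flag); names are
    illustrative, not the patch's actual identifiers."""
    state: str = "prepared"   # 'preparing' | 'prepared' | 'committing' | 'aborting'
    in_doubt: bool = False    # set after a crash/cancel left the outcome unknown

    def resolve_manually(self, committed_in_clog: bool) -> None:
        # pg_resolve_fdwxact(): look up the local transaction's fate in
        # clog once and record the decision in the entry.
        assert self.in_doubt, "only in-doubt entries are resolved manually"
        self.state = "committing" if committed_in_clog else "aborting"

    def resolver_finish(self) -> str:
        # If the manual call was cancelled mid-way, the entry is left as
        # state='committing', in_doubt=True; a resolver process can then
        # finish the work without consulting clog again.
        if self.state == "committing":
            self.state = "committed"
        elif self.state == "aborting":
            self.state = "aborted"
        return self.state
```

This is only a sketch of the behavior argued for in the thread, not a claim about the final design (which, per Amit's suggestion, might fold in-doubt into the state enum instead).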
{
"msg_contents": "On Fri, Jun 12, 2020 at 7:59 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 11 Jun 2020 at 22:21, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> > I have another question about\n> > this field, why can't it be one of the status ('preparing',\n> > 'prepared', 'committing', 'aborting', 'in-doubt') rather than having a\n> > separate field?\n>\n> Because I'm using in-doubt field also for checking if the foreign\n> transaction entry can also be resolved manually, i.g.\n> pg_resolve_foreign_xact(). For instance, a foreign transaction which\n> status = 'prepared' and in-doubt = 'true' can be resolved either\n> foreign transaction resolver or pg_resolve_foreign_xact(). When a user\n> execute pg_resolve_foreign_xact() against the foreign transaction, it\n> sets status = 'committing' (or 'rollbacking') by checking transaction\n> status in clog. The user might cancel pg_resolve_foreign_xact() during\n> resolution. In this case, the foreign transaction is still status =\n> 'committing' and in-doubt = 'true'. Then if a foreign transaction\n> resolver process processes the foreign transaction, it can commit it\n> without clog looking.\n>\n\nI think this is a corner case and it is better to simplify the state\nrecording of foreign transactions than to save a CLOG lookup.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 12 Jun 2020 09:10:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, 12 Jun 2020 at 12:40, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jun 12, 2020 at 7:59 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Thu, 11 Jun 2020 at 22:21, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> >\n> > > I have another question about\n> > > this field, why can't it be one of the status ('preparing',\n> > > 'prepared', 'committing', 'aborting', 'in-doubt') rather than having a\n> > > separate field?\n> >\n> > Because I'm using in-doubt field also for checking if the foreign\n> > transaction entry can also be resolved manually, i.g.\n> > pg_resolve_foreign_xact(). For instance, a foreign transaction which\n> > status = 'prepared' and in-doubt = 'true' can be resolved either\n> > foreign transaction resolver or pg_resolve_foreign_xact(). When a user\n> > execute pg_resolve_foreign_xact() against the foreign transaction, it\n> > sets status = 'committing' (or 'rollbacking') by checking transaction\n> > status in clog. The user might cancel pg_resolve_foreign_xact() during\n> > resolution. In this case, the foreign transaction is still status =\n> > 'committing' and in-doubt = 'true'. Then if a foreign transaction\n> > resolver process processes the foreign transaction, it can commit it\n> > without clog looking.\n> >\n>\n> I think this is a corner case and it is better to simplify the state\n> recording of foreign transactions then to save a CLOG lookup.\n>\n\nThe main usage of in-doubt flag is to distinguish between in-doubt\ntransactions and other transactions that have their waiter (I call\non-line transactions). If one foreign server downs for a long time\nafter the server crash during distributed transaction commit, foreign\ntransaction resolver tries to resolve the foreign transaction but\nfails because the foreign server doesn’t respond. 
We’d like to avoid\nthe situation where a resolver process always picks up that foreign\ntransaction and other on-line transactions waiting to be resolved\ncannot move forward. Therefore, a resolver process prioritizes on-line\ntransactions. Once the shmem queue of on-line transactions becomes\nempty, a resolver process looks at the array of foreign transaction\nstates to get in-doubt transactions to resolve. I think we should not\nprocess both in-doubt transactions and on-line transactions in the\nsame way.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 12 Jun 2020 13:23:59 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
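The scheduling policy described in the message above — drain the shmem queue of on-line transactions first, and fall back to the in-doubt array only once that queue is empty — can be sketched as follows. The names are hypothetical (the real implementation is C code in the resolver process); the sketch just shows why a dead foreign server holding an in-doubt entry cannot starve waiting backends:

```python
from collections import deque

class ResolverScheduler:
    """Toy model of the resolver's work-picking order, under assumed names."""

    def __init__(self):
        self.online_queue = deque()  # backends currently waiting on COMMIT
        self.in_doubt = []           # entries left over from crashes/cancels

    def pick_next(self):
        # On-line work always wins, so an unreachable foreign server that
        # keeps an in-doubt entry around cannot block waiting backends.
        if self.online_queue:
            return self.online_queue.popleft()
        if self.in_doubt:
            return self.in_doubt.pop(0)
        return None
```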
{
"msg_contents": "On Fri, Jun 12, 2020 at 9:54 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Fri, 12 Jun 2020 at 12:40, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jun 12, 2020 at 7:59 AM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > On Thu, 11 Jun 2020 at 22:21, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > >\n> > > > I have another question about\n> > > > this field, why can't it be one of the status ('preparing',\n> > > > 'prepared', 'committing', 'aborting', 'in-doubt') rather than having a\n> > > > separate field?\n> > >\n> > > Because I'm using in-doubt field also for checking if the foreign\n> > > transaction entry can also be resolved manually, i.g.\n> > > pg_resolve_foreign_xact(). For instance, a foreign transaction which\n> > > status = 'prepared' and in-doubt = 'true' can be resolved either\n> > > foreign transaction resolver or pg_resolve_foreign_xact(). When a user\n> > > execute pg_resolve_foreign_xact() against the foreign transaction, it\n> > > sets status = 'committing' (or 'rollbacking') by checking transaction\n> > > status in clog. The user might cancel pg_resolve_foreign_xact() during\n> > > resolution. In this case, the foreign transaction is still status =\n> > > 'committing' and in-doubt = 'true'. Then if a foreign transaction\n> > > resolver process processes the foreign transaction, it can commit it\n> > > without clog looking.\n> > >\n> >\n> > I think this is a corner case and it is better to simplify the state\n> > recording of foreign transactions then to save a CLOG lookup.\n> >\n>\n> The main usage of in-doubt flag is to distinguish between in-doubt\n> transactions and other transactions that have their waiter (I call\n> on-line transactions).\n>\n\nWhich are these other online transactions? 
I had assumed that foreign\ntransaction resolver process is to resolve in-doubt transactions but\nit seems it is also used for some other purpose which anyway was the\nnext question I had while reviewing other sections of docs but let's\nclarify as it came up now.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 12 Jun 2020 12:06:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, 12 Jun 2020 at 15:37, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jun 12, 2020 at 9:54 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Fri, 12 Jun 2020 at 12:40, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Jun 12, 2020 at 7:59 AM Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >\n> > > > On Thu, 11 Jun 2020 at 22:21, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > >\n> > > > > I have another question about\n> > > > > this field, why can't it be one of the status ('preparing',\n> > > > > 'prepared', 'committing', 'aborting', 'in-doubt') rather than having a\n> > > > > separate field?\n> > > >\n> > > > Because I'm using in-doubt field also for checking if the foreign\n> > > > transaction entry can also be resolved manually, i.g.\n> > > > pg_resolve_foreign_xact(). For instance, a foreign transaction which\n> > > > status = 'prepared' and in-doubt = 'true' can be resolved either\n> > > > foreign transaction resolver or pg_resolve_foreign_xact(). When a user\n> > > > execute pg_resolve_foreign_xact() against the foreign transaction, it\n> > > > sets status = 'committing' (or 'rollbacking') by checking transaction\n> > > > status in clog. The user might cancel pg_resolve_foreign_xact() during\n> > > > resolution. In this case, the foreign transaction is still status =\n> > > > 'committing' and in-doubt = 'true'. Then if a foreign transaction\n> > > > resolver process processes the foreign transaction, it can commit it\n> > > > without clog looking.\n> > > >\n> > >\n> > > I think this is a corner case and it is better to simplify the state\n> > > recording of foreign transactions then to save a CLOG lookup.\n> > >\n> >\n> > The main usage of in-doubt flag is to distinguish between in-doubt\n> > transactions and other transactions that have their waiter (I call\n> > on-line transactions).\n> >\n>\n> Which are these other online transactions? 
I had assumed that foreign\n> transaction resolver process is to resolve in-doubt transactions but\n> it seems it is also used for some other purpose which anyway was the\n> next question I had while reviewing other sections of docs but let's\n> clarify as it came up now.\n\nWhen a distributed transaction is committed by the COMMIT command, the\npostgres backend process prepares all foreign transactions and commits\nthe local transaction. Then the backend enqueues itself to the shmem\nqueue, asks a resolver process to commit the prepared foreign\ntransactions, and waits. That is, these prepared foreign transactions\nare committed by the resolver process, not the backend process. Once the\nresolver process has committed all prepared foreign transactions, it wakes\nthe waiting backend process. This is the kind of transaction I meant by\non-line transactions. This procedure is similar to what synchronous\nreplication does.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 12 Jun 2020 17:40:08 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, Jun 12, 2020 at 2:10 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Fri, 12 Jun 2020 at 15:37, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > >\n> > > > I think this is a corner case and it is better to simplify the state\n> > > > recording of foreign transactions then to save a CLOG lookup.\n> > > >\n> > >\n> > > The main usage of in-doubt flag is to distinguish between in-doubt\n> > > transactions and other transactions that have their waiter (I call\n> > > on-line transactions).\n> > >\n> >\n> > Which are these other online transactions? I had assumed that foreign\n> > transaction resolver process is to resolve in-doubt transactions but\n> > it seems it is also used for some other purpose which anyway was the\n> > next question I had while reviewing other sections of docs but let's\n> > clarify as it came up now.\n>\n> When a distributed transaction is committed by COMMIT command, the\n> postgres backend process prepare all foreign transaction and commit\n> the local transaction.\n>\n\nDoes this mean that we will mark the xid as committed in CLOG of the\nlocal server? 
If so, why is this okay till we commit transactions in\nall the foreign servers, what if we fail to commit on one of the\nservers?\n\nFew more comments on v22-0003-Documentation-update\n--------------------------------------------------------------------------------------\n1.\n+ When <literal>disabled</literal> there can be risk of database\n+ consistency among all servers that involved in the distributed\n+ transaction when some foreign server crashes during committing the\n+ distributed transaction.\n\nWill it read better if rephrase above to something like: \"When\n<literal>disabled</literal> there can be a risk of database\nconsistency if one or more foreign servers crashes while committing\nthe distributed transaction.\"?\n\n2.\n+ <varlistentry\nid=\"guc-foreign-transaction-resolution-rety-interval\"\nxreflabel=\"foreign_transaction_resolution_retry_interval\">\n+ <term><varname>foreign_transaction_resolution_retry_interval</varname>\n(<type>integer</type>)\n+ <indexterm>\n+ <primary><varname>foreign_transaction_resolution_interval</varname>\nconfiguration parameter</primary>\n+ </indexterm>\n+ </term>\n+ <listitem>\n+ <para>\n+ Specify how long the foreign transaction resolver should\nwait when the last resolution\n+ fails before retrying to resolve foreign transaction. This\nparameter can only be set in the\n+ <filename>postgresql.conf</filename> file or on the server\ncommand line.\n+ </para>\n+ <para>\n+ The default value is 10 seconds.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n\nTypo. <varlistentry\nid=\"guc-foreign-transaction-resolution-rety-interval\", spelling of\nretry is wrong. Do we really need such a guc parameter? I think we\ncan come up with some simple algorithm to retry after a few seconds\nand then increase that interval of retry if we fail again or something\nlike that. 
I don't know how users can come up with some non-default\nvalue for this variable.\n\n3\n+ <varlistentry id=\"guc-foreign-transaction-resolver-timeout\"\nxreflabel=\"foreign_transaction_resolver_timeout\">\n+ <term><varname>foreign_transaction_resolver_timeout</varname>\n(<type>integer</type>)\n+ <indexterm>\n+ <primary><varname>foreign_transaction_resolver_timeout</varname>\nconfiguration parameter</primary>\n+ </indexterm>\n+ </term>\n+ <listitem>\n+ <para>\n+ Terminate foreign transaction resolver processes that don't\nhave any foreign\n+ transactions to resolve longer than the specified number of\nmilliseconds.\n+ A value of zero disables the timeout mechanism, meaning it\nconnects to one\n+ database until stopping manually.\n\nCan we mention the function name using which one can stop the resolver process?\n\n4.\n+ Using the <productname>PostgreSQL</productname>'s atomic commit ensures that\n+ all changes on foreign servers end in either commit or rollback using the\n+ transaction callback routines\n\nCan we slightly rephase this \"Using the PostgreSQL's atomic commit\nensures that all the changes on foreign servers are either committed\nor rolled back using the transaction callback routines\"?\n\n5.\n+ Prepare all transactions on foreign servers.\n+ <productname>PostgreSQL</productname> distributed transaction manager\n+ prepares all transaction on the foreign servers if two-phase commit is\n+ required. 
Two-phase commit is required when the transaction modifies\n+ data on two or more servers including the local server itself and\n+ <xref linkend=\"guc-foreign-twophase-commit\"/> is\n+ <literal>required</literal>.\n\n/PostgreSQL/PostgreSQL's.\n\n If all preparations on foreign servers got\n+ successful go to the next step.\n\nHow about \"If the prepare on all foreign servers is successful then go\nto the next step\"?\n\n Any failure happens in this step,\n+ the server changes to rollback, then rollback all transactions on both\n+ local and foreign servers.\n\nCan we rephrase this line to something like: \"If there is any failure\nin the prepare phase, the server will rollback all the transactions on\nboth local and foreign servers.\"?\n\nWhat if the issued Rollback also failed, say due to network breakdown\nbetween local and one of foreign servers? Shouldn't such a\ntransaction be 'in-doubt' state?\n\n6.\n+ <para>\n+ Commit locally. The server commits transaction locally. Any\nfailure happens\n+ in this step the server changes to rollback, then rollback all\ntransactions\n+ on both local and foreign servers.\n+ </para>\n+ </listitem>\n+ <listitem>\n+ <para>\n+ Resolve all prepared transaction on foreign servers. Pprepared\ntransactions\n+ are committed or rolled back according to the result of the\nlocal transaction.\n+ This step is normally performed by a foreign transaction\nresolver process.\n+ </para>\n\nWhen (in which step) do we commit on foreign servers? Do Resolver\nprocesses commit on foreign servers, if so, how can we commit locally\nwithout committing on foreign servers, what if the commit on one of\nthe servers fails? It is not very clear to me from the steps mentioned\nhere? 
Typo, /Pprepared/Prepared\n\n7.\nHowever, foreign transactions\n+ become <firstterm>in-doubt</firstterm> in three cases: where the foreign\n+ server crashed or lost the connectibility to it during preparing foreign\n+ transaction, where the local node crashed during either preparing or\n+ resolving foreign transaction and where user canceled the query.\n\nHere the three cases are not very clear. You might want to use (a)\n..., (b) .. ,(c).. Also, I think the state will be in-doubt even when\nwe lost connection to server during commit or rollback.\n\n8.\n+ One foreign transaction resolver is responsible for transaction resolutions\n+ on which one database connecting.\n\nCan we rephrase it to: \"One foreign transaction resolver is\nresponsible for transaction resolutions on the database to which it is\nconnected.\"?\n\n9.\n+ Note that other <productname>PostgreSQL</productname> feature\nsuch as parallel\n+ queries, logical replication, etc., also take worker slots from\n+ <varname>max_worker_processes</varname>.\n\n/feature/features\n\n10.\n+ <para>\n+ Atomic commit requires several configuration options to be set.\n+ On the local node, <xref\nlinkend=\"guc-max-prepared-foreign-transactions\"/> and\n+ <xref linkend=\"guc-max-foreign-transaction-resolvers\"/> must be\nnon-zero value.\n+ Additionally the <varname>max_worker_processes</varname> may need\nto be adjusted to\n+ accommodate for foreign transaction resolver workers, at least\n+ (<varname>max_foreign_transaction_resolvers</varname> +\n<literal>1</literal>).\n+ Note that other <productname>PostgreSQL</productname> feature\nsuch as parallel\n+ queries, logical replication, etc., also take worker slots from\n+ <varname>max_worker_processes</varname>.\n+ </para>\n\nDon't we need to mention foreign_twophase_commit GUC here?\n\n11.\n+ <sect2 id=\"fdw-callbacks-transaction-managements\">\n+ <title>FDW Routines For Transaction Managements</title>\n\nManagements/Management?\n\n12.\n+ Transaction management callbacks 
are used for doing commit, rollback and\n+ prepare the foreign transaction.\n\nLets write the above sentence as: \"Transaction management callbacks\nare used to commit, rollback and prepare the foreign transaction.\"\n\n13.\n+ <para>\n+ Transaction management callbacks are used for doing commit, rollback and\n+ prepare the foreign transaction. If an FDW wishes that its foreign\n+ transaction is managed by <productname>PostgreSQL</productname>'s global\n+ transaction manager it must provide both\n+ <function>CommitForeignTransaction</function> and\n+ <function>RollbackForeignTransaction</function>. In addition, if an FDW\n+ wishes to support <firstterm>atomic commit</firstterm> (as described in\n+ <xref linkend=\"fdw-transaction-managements\"/>), it must provide\n+ <function>PrepareForeignTransaction</function> as well and can provide\n+ <function>GetPrepareId</function> callback optionally.\n+ </para>\n\nWhat exact functionality a FDW can accomplish if it just supports\nCommitForeignTransaction and RollbackForeignTransaction? It seems it\ndoesn't care for 2PC, if so, is there any special functionality we can\nachieve with this which we can't do without these APIs?\n\n14.\n+PrepareForeignTransaction(FdwXactRslvState *frstate);\n+</programlisting>\n+ Prepare the transaction on the foreign server. This function is\ncalled at the\n+ pre-commit phase of the local transactions if foreign twophase commit is\n+ required. This function is used only for distribute transaction management\n+ (see <xref linkend=\"distributed-transaction\"/>).\n+ </para>\n\n/distribute/distributed\n\n15.\n+ <sect2 id=\"fdw-transaction-commit-rollback\">\n+ <title>Commit And Rollback Single Foreign Transaction</title>\n+ <para>\n+ The FDW callback function <literal>CommitForeignTransaction</literal>\n+ and <literal>RollbackForeignTransaction</literal> can be used to commit\n+ and rollback the foreign transaction. 
During transaction commit, the core\n+ transaction manager calls\n<literal>CommitForeignTransaction</literal> function\n+ in the pre-commit phase and calls\n+ <literal>RollbackForeignTransaction</literal> function in the\npost-rollback\n+ phase.\n+ </para>\n\nThere is no reasoning mentioned as to why CommitForeignTransaction has\nto be called in pre-commit phase and RollbackForeignTransaction in\npost-rollback phase? Basically why one in pre phase and other in post\nphase?\n\n16.\n+ <entry>\n+ <literal><function>pg_remove_foreign_xact(<parameter>transaction</parameter>\n<type>xid</type>, <parameter>serverid</parameter> <type>oid</type>,\n<parameter>userid</parameter> <type>oid</type>)</function></literal>\n+ </entry>\n+ <entry><type>void</type></entry>\n+ <entry>\n+ This function works the same as\n<function>pg_resolve_foreign_xact</function>\n+ except that this removes the foreign transcation entry\nwithout resolution.\n+ </entry>\n\nCan we write why and when such a function can be used? Typo,\n/trasnaction/transaction\n\n17.\n+ <row>\n+ <entry><literal>FdwXactResolutionLock</literal></entry>\n+ <entry>Waiting to read or update information of foreign trasnaction\n+ resolution.</entry>\n+ </row>\n\n/trasnaction/transaction\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 12 Jun 2020 15:54:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, 12 Jun 2020 at 19:24, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jun 12, 2020 at 2:10 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Fri, 12 Jun 2020 at 15:37, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > > >\n> > > > > I think this is a corner case and it is better to simplify the state\n> > > > > recording of foreign transactions then to save a CLOG lookup.\n> > > > >\n> > > >\n> > > > The main usage of in-doubt flag is to distinguish between in-doubt\n> > > > transactions and other transactions that have their waiter (I call\n> > > > on-line transactions).\n> > > >\n> > >\n> > > Which are these other online transactions? I had assumed that foreign\n> > > transaction resolver process is to resolve in-doubt transactions but\n> > > it seems it is also used for some other purpose which anyway was the\n> > > next question I had while reviewing other sections of docs but let's\n> > > clarify as it came up now.\n> >\n> > When a distributed transaction is committed by COMMIT command, the\n> > postgres backend process prepare all foreign transaction and commit\n> > the local transaction.\n> >\n\nThank you for your review comments! Let me answer your question first.\nI'll see the review comments.\n\n>\n> Does this mean that we will mark the xid as committed in CLOG of the\n> local server?\n\nWell what I meant is that when the client executes COMMIT command, the\nbackend executes PREPARE TRANSACTION command on all involved foreign\nservers and then marks the xid as committed in clog in the local\nserver.\n\n> If so, why is this okay till we commit transactions in\n> all the foreign servers, what if we fail to commit on one of the\n> servers?\n\nOnce the local transaction is committed, all involved foreign\ntransactions never be rolled back. The backend already prepared all\nforeign transaction before local commit, committing prepared foreign\ntransaction basically doesn't fail. 
But even if it fails for whatever\nreason, we never roll back the already-prepared foreign transactions. A\nresolver process retries committing those foreign transactions at certain\nintervals. Does that answer your question?\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 12 Jun 2020 21:53:50 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
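The commit sequence discussed in this exchange — prepare on every foreign server, make the local commit the global decision point, then only ever (re)try committing the prepared foreign transactions — can be sketched roughly as below. This is an assumed shape in Python, not the patch's C code; in the patch the final loop is performed by a resolver process that sleeps between retries rather than spinning:

```python
def distributed_commit(local, foreign_servers):
    prepared = []
    for srv in foreign_servers:
        try:
            srv.prepare()            # PREPARE TRANSACTION on the foreign side
            prepared.append(srv)
        except ConnectionError:
            # Before the local commit we may still abort everything.
            for p in prepared:
                p.rollback_prepared()
            local.abort()
            raise
    local.commit()                   # global decision: from here on, commit only
    for srv in prepared:             # in the patch, a resolver does this part
        while True:
            try:
                srv.commit_prepared()
                break
            except ConnectionError:
                pass                 # a real resolver retries at intervals
```

Note how, once `local.commit()` has run, no rollback is ever issued — matching the statement above that prepared foreign transactions are never rolled back after the local commit, only retried. It also makes the visibility gap in the following messages concrete: between the local commit and the last `commit_prepared()`, readers can see the local changes but not the foreign ones.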
{
"msg_contents": "On Fri, Jun 12, 2020 at 6:24 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Fri, 12 Jun 2020 at 19:24, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > > Which are these other online transactions? I had assumed that foreign\n> > > > transaction resolver process is to resolve in-doubt transactions but\n> > > > it seems it is also used for some other purpose which anyway was the\n> > > > next question I had while reviewing other sections of docs but let's\n> > > > clarify as it came up now.\n> > >\n> > > When a distributed transaction is committed by COMMIT command, the\n> > > postgres backend process prepare all foreign transaction and commit\n> > > the local transaction.\n> > >\n>\n> Thank you for your review comments! Let me answer your question first.\n> I'll see the review comments.\n>\n> >\n> > Does this mean that we will mark the xid as committed in CLOG of the\n> > local server?\n>\n> Well what I meant is that when the client executes COMMIT command, the\n> backend executes PREPARE TRANSACTION command on all involved foreign\n> servers and then marks the xid as committed in clog in the local\n> server.\n>\n\nWon't it create an inconsistency in viewing the data from the\ndifferent servers? Say, such a transaction inserts one row into a\nlocal server and another into the foreign server. Now, if we follow\nthe above protocol, the user will be able to see the row from the\nlocal server but not from the foreign server.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 13 Jun 2020 10:32:39 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
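The commit sequence described in this exchange (prepare every foreign transaction, commit locally, then commit the prepared foreign transactions) can be sketched as follows. This is an illustrative model only, not the patch's actual code; the class and function names are invented for the example.

```python
# Sketch of the atomic-commit protocol discussed above: PREPARE on every
# foreign server, commit locally, then commit the prepared foreign
# transactions (in the patch this last step is a resolver process's job).

class ForeignServer:
    def __init__(self, name, fail_prepare=False):
        self.name = name
        self.fail_prepare = fail_prepare
        self.state = "active"  # active -> prepared -> committed / aborted

    def prepare(self):
        if self.fail_prepare:
            self.state = "aborted"
            raise RuntimeError(f"PREPARE failed on {self.name}")
        self.state = "prepared"

    def commit_prepared(self):
        assert self.state == "prepared"
        self.state = "committed"

    def rollback(self):
        self.state = "aborted"


def commit_distributed(local, foreign_servers):
    """Any PREPARE failure rolls everything back; once the local commit
    succeeds, prepared foreign transactions are only ever committed
    (retried until they succeed), never rolled back."""
    try:
        for fs in foreign_servers:
            fs.prepare()                 # phase 1
    except RuntimeError:
        for fs in foreign_servers:
            fs.rollback()
        local["state"] = "aborted"
        return False
    local["state"] = "committed"         # the local commit decides the outcome
    for fs in foreign_servers:
        fs.commit_prepared()             # phase 2 (resolver's work)
    return True
```

The sketch also shows the point Amit raises: between the local commit and the last `commit_prepared()`, another session can already see the local change but not the foreign one.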
{
"msg_contents": "On Sat, 13 Jun 2020 at 14:02, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jun 12, 2020 at 6:24 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Fri, 12 Jun 2020 at 19:24, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > > > Which are these other online transactions? I had assumed that foreign\n> > > > > transaction resolver process is to resolve in-doubt transactions but\n> > > > > it seems it is also used for some other purpose which anyway was the\n> > > > > next question I had while reviewing other sections of docs but let's\n> > > > > clarify as it came up now.\n> > > >\n> > > > When a distributed transaction is committed by COMMIT command, the\n> > > > postgres backend process prepare all foreign transaction and commit\n> > > > the local transaction.\n> > > >\n> >\n> > Thank you for your review comments! Let me answer your question first.\n> > I'll see the review comments.\n> >\n> > >\n> > > Does this mean that we will mark the xid as committed in CLOG of the\n> > > local server?\n> >\n> > Well what I meant is that when the client executes COMMIT command, the\n> > backend executes PREPARE TRANSACTION command on all involved foreign\n> > servers and then marks the xid as committed in clog in the local\n> > server.\n> >\n>\n> Won't it create an inconsistency in viewing the data from the\n> different servers? Say, such a transaction inserts one row into a\n> local server and another into the foreign server. Now, if we follow\n> the above protocol, the user will be able to see the row from the\n> local server but not from the foreign server.\n\nYes, you're right. This atomic commit feature doesn't guarantee such\nconsistent visibility so-called atomic visibility. Even the local\nserver is not modified, since a resolver process commits prepared\nforeign transactions one by one another user could see an inconsistent\nresult. 
Providing globally consistent snapshots to transactions\ninvolving foreign servers is one of the solutions.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 14 Jun 2020 08:29:43 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": ">> Won't it create an inconsistency in viewing the data from the\n>> different servers? Say, such a transaction inserts one row into a\n>> local server and another into the foreign server. Now, if we follow\n>> the above protocol, the user will be able to see the row from the\n>> local server but not from the foreign server.\n> \n> Yes, you're right. This atomic commit feature doesn't guarantee such\n> consistent visibility so-called atomic visibility. Even the local\n> server is not modified, since a resolver process commits prepared\n> foreign transactions one by one another user could see an inconsistent\n> result. Providing globally consistent snapshots to transactions\n> involving foreign servers is one of the solutions.\n\nAnother approach to the atomic visibility problem is to control\nsnapshot acquisition timing and commit timing (plus using REPEATABLE\nREAD). In the REPEATABLE READ transaction isolation level, PostgreSQL\nassigns a snapshot at the time when the first command is executed in a\ntransaction. If we could prevent any commit while any transaction is\nacquiring snapshot, and we could prevent any snapshot acquisition while\ncommitting, visibility inconsistency which Amit explained can be\navoided.\n\nThis approach was proposed in an academic paper [1].\n\nA good point of this approach is that we don't need to modify PostgreSQL\nat all.\n\nA downside of the approach is that we need someone who controls the timings\n(in [1], a middleware called \"Pangea\" was proposed). Also we need to\nlimit the transaction isolation level to REPEATABLE READ.\n\n[1] http://www.vldb.org/pvldb/vol2/vldb09-694.pdf\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Sun, 14 Jun 2020 17:51:14 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take\n 2"
},
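The Pangea-style timing control described in this message (no commit while a snapshot is being acquired, and no snapshot acquisition while a commit is in flight) can be sketched with a single mutual-exclusion gate. This is a toy model of the idea in [1], not Pangea's actual design; the class name and the dict-of-lists "nodes" representation are invented for the example.

```python
# Minimal sketch of snapshot/commit mutual exclusion: because a snapshot
# is taken across all nodes atomically with respect to commits, it can
# never observe a distributed commit half applied.
import threading


class SnapshotCommitGate:
    def __init__(self):
        self._lock = threading.Lock()

    def acquire_snapshot(self, nodes):
        # Capture the visible rows of every node in one critical section.
        with self._lock:
            return {name: tuple(rows) for name, rows in nodes.items()}

    def commit(self, nodes, new_row):
        # Apply the commit to every node in one critical section.
        with self._lock:
            for rows in nodes.values():
                rows.append(new_row)
```

A snapshot taken before the commit sees the new row on no node, and one taken after sees it on every node, which is exactly the atomic-visibility property being discussed.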
{
"msg_contents": "On Sun, Jun 14, 2020 at 2:21 PM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>\n> >> Won't it create an inconsistency in viewing the data from the\n> >> different servers? Say, such a transaction inserts one row into a\n> >> local server and another into the foreign server. Now, if we follow\n> >> the above protocol, the user will be able to see the row from the\n> >> local server but not from the foreign server.\n> >\n> > Yes, you're right. This atomic commit feature doesn't guarantee such\n> > consistent visibility so-called atomic visibility.\n\nOkay, I understand that the purpose of this feature is to provide\natomic commit which means the transaction on all servers involved will\neither commit or rollback. However, I think we should at least see at\na high level how the visibility will work because it might influence\nthe implementation of this feature.\n\n> > Even the local\n> > server is not modified, since a resolver process commits prepared\n> > foreign transactions one by one another user could see an inconsistent\n> > result. Providing globally consistent snapshots to transactions\n> > involving foreign servers is one of the solutions.\n\nHow would it be able to do that? Say, when it decides to take a\nsnapshot the transaction on the foreign server appears to be committed\nbut the transaction on the local server won't appear to be committed,\nso the consistent data visibility problem as mentioned above could\nstill arise.\n\n>\n> Another approach to the atomic visibility problem is to control\n> snapshot acquisition timing and commit timing (plus using REPEATABLE\n> READ). In the REPEATABLE READ transaction isolation level, PostgreSQL\n> assigns a snapshot at the time when the first command is executed in a\n> transaction. 
If we could prevent any commit while any transaction is\n> acquiring snapshot, and we could prevent any snapshot acquisition while\n> committing, visibility inconsistency which Amit explained can be\n> avoided.\n>\n\nI think the problem mentioned above can occur with this as well or if\nI am missing something then can you explain in further detail how it\nwon't create problem in the scenario I have used above?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 Jun 2020 11:50:04 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": ">> Another approach to the atomic visibility problem is to control\n>> snapshot acquisition timing and commit timing (plus using REPEATABLE\n>> READ). In the REPEATABLE READ transaction isolation level, PostgreSQL\n>> assigns a snapshot at the time when the first command is executed in a\n>> transaction. If we could prevent any commit while any transaction is\n>> acquiring snapshot, and we could prevent any snapshot acquisition while\n>> committing, visibility inconsistency which Amit explained can be\n>> avoided.\n>>\n> \n> I think the problem mentioned above can occur with this as well or if\n> I am missing something then can you explain in further detail how it\n> won't create problem in the scenario I have used above?\n\nSo the problem you mentioned above is like this? (S1/S2 denotes\ntransactions (sessions), N1/N2 is the postgreSQL servers). Since S1\nalready committed on N1, S2 sees the row on N1. However S2 does not\nsee the row on N2 since S1 has not committed on N2 yet.\n\nS1/N1: DROP TABLE t1;\nDROP TABLE\nS1/N1: CREATE TABLE t1(i int);\nCREATE TABLE\nS1/N2: DROP TABLE t1;\nDROP TABLE\nS1/N2: CREATE TABLE t1(i int);\nCREATE TABLE\nS1/N1: BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;\nBEGIN\nS1/N2: BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;\nBEGIN\nS2/N1: BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;\nBEGIN\nS1/N1: INSERT INTO t1 VALUES (1);\nINSERT 0 1\nS1/N2: INSERT INTO t1 VALUES (1);\nINSERT 0 1\nS1/N1: PREPARE TRANSACTION 's1n1';\nPREPARE TRANSACTION\nS1/N2: PREPARE TRANSACTION 's1n2';\nPREPARE TRANSACTION\nS2/N1: PREPARE TRANSACTION 's2n1';\nPREPARE TRANSACTION\nS1/N1: COMMIT PREPARED 's1n1';\nCOMMIT PREPARED\nS2/N1: SELECT * FROM t1; -- see the row\n i \n---\n 1\n(1 row)\n\nS2/N2: SELECT * FROM t1; -- doesn't see the row\n i \n---\n(0 rows)\n\nS1/N2: COMMIT PREPARED 's1n2';\nCOMMIT PREPARED\nS2/N1: COMMIT PREPARED 's2n1';\nCOMMIT PREPARED\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. 
Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Mon, 15 Jun 2020 16:00:16 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take\n 2"
},
{
"msg_contents": "On Mon, Jun 15, 2020 at 12:30 PM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>\n> >> Another approach to the atomic visibility problem is to control\n> >> snapshot acquisition timing and commit timing (plus using REPEATABLE\n> >> READ). In the REPEATABLE READ transaction isolation level, PostgreSQL\n> >> assigns a snapshot at the time when the first command is executed in a\n> >> transaction. If we could prevent any commit while any transaction is\n> >> acquiring snapshot, and we could prevent any snapshot acquisition while\n> >> committing, visibility inconsistency which Amit explained can be\n> >> avoided.\n> >>\n> >\n> > I think the problem mentioned above can occur with this as well or if\n> > I am missing something then can you explain in further detail how it\n> > won't create problem in the scenario I have used above?\n>\n> So the problem you mentioned above is like this? (S1/S2 denotes\n> transactions (sessions), N1/N2 is the postgreSQL servers). Since S1\n> already committed on N1, S2 sees the row on N1. However S2 does not\n> see the row on N2 since S1 has not committed on N2 yet.\n>\n\nYeah, something on these lines but S2 can execute the query on N1\ndirectly which should fetch the data from both N1 and N2. Even if\nthere is a solution using REPEATABLE READ isolation level we might not\nprefer to use that as the only level for distributed transactions, it\nmight be too costly but let us first see how does it solve the\nproblem?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 Jun 2020 16:32:22 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Mon, 15 Jun 2020 at 15:20, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Jun 14, 2020 at 2:21 PM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> >\n> > >> Won't it create an inconsistency in viewing the data from the\n> > >> different servers? Say, such a transaction inserts one row into a\n> > >> local server and another into the foreign server. Now, if we follow\n> > >> the above protocol, the user will be able to see the row from the\n> > >> local server but not from the foreign server.\n> > >\n> > > Yes, you're right. This atomic commit feature doesn't guarantee such\n> > > consistent visibility so-called atomic visibility.\n>\n> Okay, I understand that the purpose of this feature is to provide\n> atomic commit which means the transaction on all servers involved will\n> either commit or rollback. However, I think we should at least see at\n> a high level how the visibility will work because it might influence\n> the implementation of this feature.\n>\n> > > Even the local\n> > > server is not modified, since a resolver process commits prepared\n> > > foreign transactions one by one another user could see an inconsistent\n> > > result. Providing globally consistent snapshots to transactions\n> > > involving foreign servers is one of the solutions.\n>\n> How would it be able to do that? Say, when it decides to take a\n> snapshot the transaction on the foreign server appears to be committed\n> but the transaction on the local server won't appear to be committed,\n> so the consistent data visibility problem as mentioned above could\n> still arise.\n\nThere are many solutions. For instance, in Postgres-XC/X2 (and maybe\nXL), there is a GTM node that is responsible for providing global\ntransaction IDs (GXID) and globally consistent snapshots. All\ntransactions need to access GTM when checking the distributed\ntransaction status as well as starting transactions and ending\ntransactions. 
IIUC if a global transaction accesses a tuple whose GXID\nis included in its global snapshot it waits for that transaction to be\ncommitted or rolled back.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 15 Jun 2020 22:35:49 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
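The GTM scheme sketched in this message can be modeled roughly as below. This is an illustrative simplification of the Postgres-XC idea, not its implementation: a central authority hands out GXIDs and global snapshots (the set of GXIDs in progress at snapshot time), and a tuple whose writer's GXID appears in the snapshot is not visible to that snapshot (a real system would wait for the writer's outcome rather than just hide the tuple).

```python
# Toy global transaction manager: GXID assignment plus globally
# consistent snapshots as "set of in-progress GXIDs".
import itertools


class GTM:
    def __init__(self):
        self._next_gxid = itertools.count(1)
        self.in_progress = set()
        self.committed = set()

    def begin(self):
        gxid = next(self._next_gxid)
        self.in_progress.add(gxid)
        return gxid

    def commit(self, gxid):
        self.in_progress.discard(gxid)
        self.committed.add(gxid)

    def snapshot(self):
        # A globally consistent snapshot: every GXID still in progress.
        return frozenset(self.in_progress)


def visible(tuple_gxid, snapshot, gtm):
    # A tuple is invisible if its writer was in progress when the
    # snapshot was taken, even if that writer has committed since.
    return tuple_gxid in gtm.committed and tuple_gxid not in snapshot
```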
{
"msg_contents": "On Mon, Jun 15, 2020 at 7:06 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Mon, 15 Jun 2020 at 15:20, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > > > Even the local\n> > > > server is not modified, since a resolver process commits prepared\n> > > > foreign transactions one by one another user could see an inconsistent\n> > > > result. Providing globally consistent snapshots to transactions\n> > > > involving foreign servers is one of the solutions.\n> >\n> > How would it be able to do that? Say, when it decides to take a\n> > snapshot the transaction on the foreign server appears to be committed\n> > but the transaction on the local server won't appear to be committed,\n> > so the consistent data visibility problem as mentioned above could\n> > still arise.\n>\n> There are many solutions. For instance, in Postgres-XC/X2 (and maybe\n> XL), there is a GTM node that is responsible for providing global\n> transaction IDs (GXID) and globally consistent snapshots. All\n> transactions need to access GTM when checking the distributed\n> transaction status as well as starting transactions and ending\n> transactions. IIUC if a global transaction accesses a tuple whose GXID\n> is included in its global snapshot it waits for that transaction to be\n> committed or rolled back.\n>\n\nIs there some mapping between GXID and XIDs allocated for each node or\nwill each node use the GXID as XID to modify the data? Are we fine\nwith parking the work for global snapshots and atomic visibility to a\nseparate patch and just proceed with the design proposed by this\npatch? I am asking because I thought there might be some impact on\nthe design of this patch based on what we decide for that work.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 Jun 2020 15:39:47 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Tue, Jun 16, 2020 at 3:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jun 15, 2020 at 7:06 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Mon, 15 Jun 2020 at 15:20, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > > > Even the local\n> > > > > server is not modified, since a resolver process commits prepared\n> > > > > foreign transactions one by one another user could see an inconsistent\n> > > > > result. Providing globally consistent snapshots to transactions\n> > > > > involving foreign servers is one of the solutions.\n> > >\n> > > How would it be able to do that? Say, when it decides to take a\n> > > snapshot the transaction on the foreign server appears to be committed\n> > > but the transaction on the local server won't appear to be committed,\n> > > so the consistent data visibility problem as mentioned above could\n> > > still arise.\n> >\n> > There are many solutions. For instance, in Postgres-XC/X2 (and maybe\n> > XL), there is a GTM node that is responsible for providing global\n> > transaction IDs (GXID) and globally consistent snapshots. All\n> > transactions need to access GTM when checking the distributed\n> > transaction status as well as starting transactions and ending\n> > transactions. IIUC if a global transaction accesses a tuple whose GXID\n> > is included in its global snapshot it waits for that transaction to be\n> > committed or rolled back.\n> >\n>\n> Is there some mapping between GXID and XIDs allocated for each node or\n> will each node use the GXID as XID to modify the data? Are we fine\n> with parking the work for global snapshots and atomic visibility to a\n> separate patch and just proceed with the design proposed by this\n> patch?\n\nDistributed transaction involves, atomic commit, atomic visibility\nand global consistency. 2PC is the only practical solution for atomic\ncommit. 
There are some improvements over 2PC but those are add-ons to\nthe basic 2PC, which is what this patch provides. Atomic visibility\nand global consistency, however, have alternative solutions, but all of\nthose solutions require 2PC to be supported. Each of those is a large\npiece of work and trying to get everything in may not work. Once we\nhave basic 2PC in place, there will be ground to experiment with\nsolutions for global consistency and atomic visibility. If we manage\nto do it right, we could make it pluggable as well. So, I think we\nshould concentrate on supporting basic 2PC work now.\n\n> I am asking because I thought there might be some impact on\n> the design of this patch based on what we decide for that work.\n>\n\nSince 2PC is at the heart of any distributed transaction system, the\nimpact will be low. Figuring all of that out, without having basic 2PC,\nwill be very hard.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 16 Jun 2020 18:42:52 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, 12 Jun 2020 at 19:24, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n\nThank you for your reviews on 0003 patch. I've incorporated your\ncomments. I'll submit the latest version patch later as the design or\nscope might change as a result of the discussion.\n\n>\n> Few more comments on v22-0003-Documentation-update\n> --------------------------------------------------------------------------------------\n> 1.\n> + When <literal>disabled</literal> there can be risk of database\n> + consistency among all servers that involved in the distributed\n> + transaction when some foreign server crashes during committing the\n> + distributed transaction.\n>\n> Will it read better if rephrase above to something like: \"When\n> <literal>disabled</literal> there can be a risk of database\n> consistency if one or more foreign servers crashes while committing\n> the distributed transaction.\"?\n\nFixed.\n\n>\n> 2.\n> + <varlistentry\n> id=\"guc-foreign-transaction-resolution-rety-interval\"\n> xreflabel=\"foreign_transaction_resolution_retry_interval\">\n> + <term><varname>foreign_transaction_resolution_retry_interval</varname>\n> (<type>integer</type>)\n> + <indexterm>\n> + <primary><varname>foreign_transaction_resolution_interval</varname>\n> configuration parameter</primary>\n> + </indexterm>\n> + </term>\n> + <listitem>\n> + <para>\n> + Specify how long the foreign transaction resolver should\n> wait when the last resolution\n> + fails before retrying to resolve foreign transaction. This\n> parameter can only be set in the\n> + <filename>postgresql.conf</filename> file or on the server\n> command line.\n> + </para>\n> + <para>\n> + The default value is 10 seconds.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n>\n> Typo. <varlistentry\n> id=\"guc-foreign-transaction-resolution-rety-interval\", spelling of\n> retry is wrong. Do we really need such a guc parameter? 
I think we\n> can come up with some simple algorithm to retry after a few seconds\n> and then increase that interval of retry if we fail again or something\n> like that. I don't know how users can come up with some non-default\n> value for this variable.\n\nFor example, in an unreliable network environment, setting a lower\nvalue would help to minimize the backend wait time in case of a lost\nconnection. But I also agree with your point. In terms of\nimplementation, having backends wait for a fixed time is simpler, but\nwe can do such an incremental interval by remembering the retry count\nfor each foreign transaction.\n\nAn open question regarding retrying foreign transaction resolution is\nhow we process the case where an involved foreign server is down for a\nvery long time. If an online transaction is waiting to be resolved, there\nis no way to exit from the wait loop other than the user sending\na cancel request or the crashed server being restored. But if the foreign\nserver stays down for a long time, I think it’s not practical to\nsend a cancel request because the client would need something like a\ntimeout mechanism. So I think it might be better to provide a way to\ncancel the waiting without the user sending a cancel, for example,\nby having a timeout or a limit on the retry count. If an\nin-doubt transaction is waiting to be resolved, we keep trying to\nresolve the foreign transaction at an interval. But I wonder if the\nuser might want to disable automatic in-doubt foreign transaction\nresolution in some cases, for example, where the user knows the crashed server\nwill not be restored for a long time. 
I’m thinking that we can provide\na way to disable automatic foreign transaction resolution or disable\nit for the particular foreign transaction.\n\n>\n> 3\n> + <varlistentry id=\"guc-foreign-transaction-resolver-timeout\"\n> xreflabel=\"foreign_transaction_resolver_timeout\">\n> + <term><varname>foreign_transaction_resolver_timeout</varname>\n> (<type>integer</type>)\n> + <indexterm>\n> + <primary><varname>foreign_transaction_resolver_timeout</varname>\n> configuration parameter</primary>\n> + </indexterm>\n> + </term>\n> + <listitem>\n> + <para>\n> + Terminate foreign transaction resolver processes that don't\n> have any foreign\n> + transactions to resolve longer than the specified number of\n> milliseconds.\n> + A value of zero disables the timeout mechanism, meaning it\n> connects to one\n> + database until stopping manually.\n>\n> Can we mention the function name using which one can stop the resolver process?\n\nFixed.\n\n>\n> 4.\n> + Using the <productname>PostgreSQL</productname>'s atomic commit ensures that\n> + all changes on foreign servers end in either commit or rollback using the\n> + transaction callback routines\n>\n> Can we slightly rephase this \"Using the PostgreSQL's atomic commit\n> ensures that all the changes on foreign servers are either committed\n> or rolled back using the transaction callback routines\"?\n\nFixed.\n\n>\n> 5.\n> + Prepare all transactions on foreign servers.\n> + <productname>PostgreSQL</productname> distributed transaction manager\n> + prepares all transaction on the foreign servers if two-phase commit is\n> + required. 
Two-phase commit is required when the transaction modifies\n> + data on two or more servers including the local server itself and\n> + <xref linkend=\"guc-foreign-twophase-commit\"/> is\n> + <literal>required</literal>.\n>\n> /PostgreSQL/PostgreSQL's.\n\nFixed.\n\n>\n> If all preparations on foreign servers got\n> + successful go to the next step.\n>\n> How about \"If the prepare on all foreign servers is successful then go\n> to the next step\"?\n\nFixed.\n\n>\n> Any failure happens in this step,\n> + the server changes to rollback, then rollback all transactions on both\n> + local and foreign servers.\n>\n> Can we rephrase this line to something like: \"If there is any failure\n> in the prepare phase, the server will rollback all the transactions on\n> both local and foreign servers.\"?\n\nFixed.\n\n>\n> What if the issued Rollback also failed, say due to network breakdown\n> between local and one of foreign servers? Shouldn't such a\n> transaction be 'in-doubt' state?\n\nRollback API to rollback transaction in one-phase can be called\nrecursively. So FDWs have to tolerate recursive calling.\n\nIn the current patch, all transaction operations are performed\nsynchronously. That is, foreign transaction never becomes in-doubt\nstate without explicit cancel by the user or the local node crash.\nThat way, subsequent transactions can assume that precedent\ndistributed transactions are already resolved unless the user\ncanceled.\n\nLet me explain the details:\n\nIf the transaction turns rollback due to failure before the local\ncommit, we attempt to do both ROLLBACK and ROLLBACK PREPARED against\nforeign transactions whose status is PREPARING. That is, we end the\nforeign transactions by doing ROLLBACK. And since we're not sure\npreparation has been completed on the foreign server the backend asks\nthe resolver process for doing ROLLBACK PREPARED on the foreign\nservers. Therefore FDWs have to tolerate OBJECT_NOT_FOUND error in\nabort case. 
Since the backend process returns an acknowledgment to the\nclient only after rolling back all foreign transactions, these foreign\ntransactions don't remain in an in-doubt state.\n\nIf rolling back fails after the local commit (i.e., the client does\nROLLBACK and the resolver failed to do ROLLBACK PREPARED), a resolver\nprocess will relaunch and retry to do ROLLBACK PREPARED. The backend\nprocess waits until ROLLBACK PREPARED is successfully done or the user\ncancels. So the foreign transactions don't become in-doubt\ntransactions.\n\nSynchronousness is also an open question. If we want to support atomic\ncommit in an asynchronous manner, it might be better to implement it\nfirst in terms of complexity. The backend returns an acknowledgment to\nthe client immediately after asking the resolver process. It’s known\nas the early acknowledgment technique. The downside is that a user\nwho wants to see the result of a precedent transaction needs to make\nsure the precedent transaction is committed on all foreign servers. We\nwill also need to think about how to control it by a GUC parameter when\nwe have synchronous distributed transaction commit. Perhaps it’s\nbetter to control it independently of synchronous replication.\n\n>\n> 6.\n> + <para>\n> + Commit locally. The server commits transaction locally. Any\n> failure happens\n> + in this step the server changes to rollback, then rollback all\n> transactions\n> + on both local and foreign servers.\n> + </para>\n> + </listitem>\n> + <listitem>\n> + <para>\n> + Resolve all prepared transaction on foreign servers. Pprepared\n> transactions\n> + are committed or rolled back according to the result of the\n> local transaction.\n> + This step is normally performed by a foreign transaction\n> resolver process.\n> + </para>\n>\n> When (in which step) do we commit on foreign servers? 
Do Resolver\n> processes commit on foreign servers, if so, how can we commit locally\n> without committing on foreign servers, what if the commit on one of\n> the servers fails? It is not very clear to me from the steps mentioned\n> here?\n\nIn case 2PC is required, we commit transactions on foreign servers at\nthe final step by the resolver process. If committing a prepared\ntransaction on one of the servers fails, a resolver process relaunches\nafter an interval and retries the commit.\n\nIn case 2PC is not required, we commit transactions on foreign servers\nat the pre-commit phase by the backend.\n\n> Typo, /Pprepared/Prepared\n\nFixed.\n\n>\n> 7.\n> However, foreign transactions\n> + become <firstterm>in-doubt</firstterm> in three cases: where the foreign\n> + server crashed or lost the connectibility to it during preparing foreign\n> + transaction, where the local node crashed during either preparing or\n> + resolving foreign transaction and where user canceled the query.\n>\n> Here the three cases are not very clear. You might want to use (a)\n> ..., (b) .. ,(c)..\n\nFixed. I changed it to an itemizedlist.\n\n> Also, I think the state will be in-doubt even when\n> we lost connection to server during commit or rollback.\n\nLet me correct the cases where foreign transactions remain in an\nin-doubt state. 
There are two cases:\n\n* The local node crashed\n* The user canceled the transaction commit or rollback.\n\nEven when we lost connection to the server during commit or rollback\nprepared transaction, a backend doesn’t return an acknowledgment to\nthe client until either transaction is successfully resolved, the user\ncancels the transaction, or the local node crashes.\n\n>\n> 8.\n> + One foreign transaction resolver is responsible for transaction resolutions\n> + on which one database connecting.\n>\n> Can we rephrase it to: \"One foreign transaction resolver is\n> responsible for transaction resolutions on the database to which it is\n> connected.\"?\n\nFixed.\n\n>\n> 9.\n> + Note that other <productname>PostgreSQL</productname> feature\n> such as parallel\n> + queries, logical replication, etc., also take worker slots from\n> + <varname>max_worker_processes</varname>.\n>\n> /feature/features\n\nFixed.\n\n>\n> 10.\n> + <para>\n> + Atomic commit requires several configuration options to be set.\n> + On the local node, <xref\n> linkend=\"guc-max-prepared-foreign-transactions\"/> and\n> + <xref linkend=\"guc-max-foreign-transaction-resolvers\"/> must be\n> non-zero value.\n> + Additionally the <varname>max_worker_processes</varname> may need\n> to be adjusted to\n> + accommodate for foreign transaction resolver workers, at least\n> + (<varname>max_foreign_transaction_resolvers</varname> +\n> <literal>1</literal>).\n> + Note that other <productname>PostgreSQL</productname> feature\n> such as parallel\n> + queries, logical replication, etc., also take worker slots from\n> + <varname>max_worker_processes</varname>.\n> + </para>\n>\n> Don't we need to mention foreign_twophase_commit GUC here?\n\nFixed.\n\n>\n> 11.\n> + <sect2 id=\"fdw-callbacks-transaction-managements\">\n> + <title>FDW Routines For Transaction Managements</title>\n>\n> Managements/Management?\n\nFixed.\n\n>\n> 12.\n> + Transaction management callbacks are used for doing commit, rollback and\n> + 
prepare the foreign transaction.\n>\n> Lets write the above sentence as: \"Transaction management callbacks\n> are used to commit, rollback and prepare the foreign transaction.\"\n\nFixed.\n\n>\n> 13.\n> + <para>\n> + Transaction management callbacks are used for doing commit, rollback and\n> + prepare the foreign transaction. If an FDW wishes that its foreign\n> + transaction is managed by <productname>PostgreSQL</productname>'s global\n> + transaction manager it must provide both\n> + <function>CommitForeignTransaction</function> and\n> + <function>RollbackForeignTransaction</function>. In addition, if an FDW\n> + wishes to support <firstterm>atomic commit</firstterm> (as described in\n> + <xref linkend=\"fdw-transaction-managements\"/>), it must provide\n> + <function>PrepareForeignTransaction</function> as well and can provide\n> + <function>GetPrepareId</function> callback optionally.\n> + </para>\n>\n> What exact functionality a FDW can accomplish if it just supports\n> CommitForeignTransaction and RollbackForeignTransaction? It seems it\n> doesn't care for 2PC, if so, is there any special functionality we can\n> achieve with this which we can't do without these APIs?\n\nThere is no special functionality even if an FDW implements\nCommitForeignTrasnaction and RollbackForeignTransaction. Currently,\nsince there is no transaction API in FDW APIs, FDW developer has to\nuse XactCallback to control transactions but there is no\ndocumentation. The idea of allowing an FDW to support only\nCommitForeignTrasnaction and RollbackForeignTransaction is that FDW\ndevelopers can implement transaction management easily. But in the\nfirst patch, we also can disallow it to make the implementation\nsimple.\n\n>\n> 14.\n> +PrepareForeignTransaction(FdwXactRslvState *frstate);\n> +</programlisting>\n> + Prepare the transaction on the foreign server. This function is\n> called at the\n> + pre-commit phase of the local transactions if foreign twophase commit is\n> + required. 
This function is used only for distribute transaction management\n> + (see <xref linkend=\"distributed-transaction\"/>).\n> + </para>\n>\n> /distribute/distributed\n\nFixed.\n\n>\n> 15.\n> + <sect2 id=\"fdw-transaction-commit-rollback\">\n> + <title>Commit And Rollback Single Foreign Transaction</title>\n> + <para>\n> + The FDW callback function <literal>CommitForeignTransaction</literal>\n> + and <literal>RollbackForeignTransaction</literal> can be used to commit\n> + and rollback the foreign transaction. During transaction commit, the core\n> + transaction manager calls\n> <literal>CommitForeignTransaction</literal> function\n> + in the pre-commit phase and calls\n> + <literal>RollbackForeignTransaction</literal> function in the\n> post-rollback\n> + phase.\n> + </para>\n>\n> There is no reasoning mentioned as to why CommitForeignTransaction has\n> to be called in pre-commit phase and RollbackForeignTransaction in\n> post-rollback phase? Basically why one in pre phase and other in post\n> phase?\n\nGood point. This behavior just follows what postgres_fdw does. I'm not\nsure of the exact reason why postgres_fdw commits the transaction in\nthe pre-commit phase, but I guess that since committing a foreign\ntransaction is more likely to fail than the local commit, it is better\nto do it first.\n\n>\n> 16.\n> + <entry>\n> + <literal><function>pg_remove_foreign_xact(<parameter>transaction</parameter>\n> <type>xid</type>, <parameter>serverid</parameter> <type>oid</type>,\n> <parameter>userid</parameter> <type>oid</type>)</function></literal>\n> + </entry>\n> + <entry><type>void</type></entry>\n> + <entry>\n> + This function works the same as\n> <function>pg_resolve_foreign_xact</function>\n> + except that this removes the foreign transcation entry\n> without resolution.\n> + </entry>\n>\n> Can we write why and when such a function can be used? 
Typo,\n> /trasnaction/transaction\n\nFixed.\n\n>\n> 17.\n> + <row>\n> + <entry><literal>FdwXactResolutionLock</literal></entry>\n> + <entry>Waiting to read or update information of foreign trasnaction\n> + resolution.</entry>\n> + </row>\n>\n> /trasnaction/transaction\n\nFixed.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 16 Jun 2020 22:43:58 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": ">> > I think the problem mentioned above can occur with this as well or if\n>> > I am missing something then can you explain in further detail how it\n>> > won't create problem in the scenario I have used above?\n>>\n>> So the problem you mentioned above is like this? (S1/S2 denotes\n>> transactions (sessions), N1/N2 is the postgreSQL servers). Since S1\n>> already committed on N1, S2 sees the row on N1. However S2 does not\n>> see the row on N2 since S1 has not committed on N2 yet.\n>>\n> \n> Yeah, something on these lines but S2 can execute the query on N1\n> directly which should fetch the data from both N1 and N2.\n\nThe algorithm assumes that any client accesses the database through a\nmiddleware. Such direct access is prohibited.\n\n> Even if\n> there is a solution using REPEATABLE READ isolation level we might not\n> prefer to use that as the only level for distributed transactions, it\n> might be too costly but let us first see how does it solve the\n> problem?\n\nThe paper extends Snapshot Isolation (SI, which is the same as our\nREPEATABLE READ isolation level) to \"Global Snapshot Isolation\" (GSI).\nI think GSI will solve the problem (atomic visibility) we are\ndiscussing.\n\nUnlike READ COMMITTED, REPEATABLE READ acquires a snapshot at the time\nwhen the first command is executed in a transaction (READ COMMITTED\nacquires a snapshot at each command in a transaction). Pangea controls\nthe timing of the snapshot acquisition on pairs of transactions\n(S1/N1,N2 or S2/N1,N2) so that each pair acquires the same\nsnapshot. To achieve this, while some transactions are trying to\nacquire a snapshot, any commit operation should be postponed. Likewise\nany snapshot acquisition should wait until any in-progress commit\noperations are finished (see Algorithm I to III in the paper for more\ndetails). 
With this rule, the previous example now looks like this:\nyou can see SELECT on S2/N1 and S2/N2 give the same result.\n\nS1/N1: DROP TABLE t1;\nDROP TABLE\nS1/N1: CREATE TABLE t1(i int);\nCREATE TABLE\nS1/N2: DROP TABLE t1;\nDROP TABLE\nS1/N2: CREATE TABLE t1(i int);\nCREATE TABLE\nS1/N1: BEGIN;\nBEGIN\nS1/N2: BEGIN;\nBEGIN\nS2/N1: BEGIN;\nBEGIN\nS1/N1: SET transaction_isolation TO 'repeatable read';\nSET\nS1/N2: SET transaction_isolation TO 'repeatable read';\nSET\nS2/N1: SET transaction_isolation TO 'repeatable read';\nSET\nS1/N1: INSERT INTO t1 VALUES (1);\nINSERT 0 1\nS1/N2: INSERT INTO t1 VALUES (1);\nINSERT 0 1\nS2/N1: SELECT * FROM t1;\n i \n---\n(0 rows)\n\nS2/N2: SELECT * FROM t1;\n i \n---\n(0 rows)\n\nS1/N1: PREPARE TRANSACTION 's1n1';\nPREPARE TRANSACTION\nS1/N2: PREPARE TRANSACTION 's1n2';\nPREPARE TRANSACTION\nS2/N1: PREPARE TRANSACTION 's2n1';\nPREPARE TRANSACTION\nS1/N1: COMMIT PREPARED 's1n1';\nCOMMIT PREPARED\nS1/N2: COMMIT PREPARED 's1n2';\nCOMMIT PREPARED\nS2/N1: COMMIT PREPARED 's2n1';\nCOMMIT PREPARED\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Tue, 16 Jun 2020 23:36:17 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take\n 2"
},
{
"msg_contents": "On Tue, Jun 16, 2020 at 06:42:52PM +0530, Ashutosh Bapat wrote:\n> > Is there some mapping between GXID and XIDs allocated for each node or\n> > will each node use the GXID as XID to modify the data? Are we fine\n> > with parking the work for global snapshots and atomic visibility to a\n> > separate patch and just proceed with the design proposed by this\n> > patch?\n> \n> Distributed transaction involves, atomic commit, atomic visibility\n> and global consistency. 2PC is the only practical solution for atomic\n> commit. There are some improvements over 2PC but those are add ons to\n> the basic 2PC, which is what this patch provides. Atomic visibility\n> and global consistency however have alternative solutions but all of\n> those solutions require 2PC to be supported. Each of those are large\n> pieces of work and trying to get everything in may not work. Once we\n> have basic 2PC in place, there will be a ground to experiment with\n> solutions for global consistency and atomic visibility. If we manage\n> to do it right, we could make it pluggable as well. So, I think we\n> should concentrate on supporting basic 2PC work now.\n\nVery good summary, thank you.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Tue, 16 Jun 2020 12:54:06 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "> I've attached the new version patch set. 0006 is a separate patch\n> which introduces 'prefer' mode to foreign_twophase_commit.\n\nI hope we can use this feature. Thank you for making the patches and\nfor the discussions.\nI'm currently studying the logic and found some minor points to be\nfixed.\n\nI'm sorry if my understanding is wrong.\n\n* The v22 patches need a rebase as they don't apply to the current master.\n\n* FdwXactAtomicCommitParticipants, mentioned in\nsrc/backend/access/fdwxact/README,\n is not implemented. Is FdwXactParticipants the right name?\n\n* The following comment says that this code is for \"One-phase\",\n but the second argument of FdwXactParticipantEndTransaction() says\n this code is not \"onephase\".\n\nAtEOXact_FdwXact() in fdwxact.c\n\t/* One-phase rollback foreign transaction */\n\tFdwXactParticipantEndTransaction(fdw_part, false, false);\n\nstatic void\nFdwXactParticipantEndTransaction(FdwXactParticipant *fdw_part, bool\nonephase,\n\tbool for_commit)\n\n* The \"two_phase_commit\" option is mentioned in postgres-fdw.sgml,\n but I can't find the related code.\n\n* resolver.c comments have a sentence\n containing two blanks. (Emergency Termination)\n\n* There are some inconsistencies with the PostgreSQL wiki.\nhttps://wiki.postgresql.org/wiki/Atomic_Commit_of_Distributed_Transactions\n\n I understand it's difficult to keep them consistent; I think it's ok\nto fix this later\n when these patches are almost ready to be committed.\n\n - I can't find the \"two_phase_commit\" option in the source code.\n But 2PC works if the remote server's \"max_prepared_transactions\"\nis set\n to a non-zero value. That is the correct behavior, isn't it?\n\n - some parameters are renamed or added in the latest patches.\n max_prepared_foreign_transaction, max_prepared_transactions and so\non.\n\n - typo: froeign_transaction_resolver_timeout\n\nRegards,\n\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 17 Jun 2020 09:01:09 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Tue, Jun 16, 2020 at 8:06 PM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>\n> >> > I think the problem mentioned above can occur with this as well or if\n> >> > I am missing something then can you explain in further detail how it\n> >> > won't create problem in the scenario I have used above?\n> >>\n> >> So the problem you mentioned above is like this? (S1/S2 denotes\n> >> transactions (sessions), N1/N2 is the postgreSQL servers). Since S1\n> >> already committed on N1, S2 sees the row on N1. However S2 does not\n> >> see the row on N2 since S1 has not committed on N2 yet.\n> >>\n> >\n> > Yeah, something on these lines but S2 can execute the query on N1\n> > directly which should fetch the data from both N1 and N2.\n>\n> The algorithm assumes that any client accesses the database through a\n> middleware. Such direct access is prohibited.\n>\n\nokay, so it seems we need a few things which the middleware (Pangea)\nexpects if we have to follow the design of the paper.\n\n> > Even if\n> > there is a solution using REPEATABLE READ isolation level we might not\n> > prefer to use that as the only level for distributed transactions, it\n> > might be too costly but let us first see how does it solve the\n> > problem?\n>\n> The paper extends Snapshot Isolation (SI, which is the same as our\n> REPEATABLE READ isolation level) to \"Global Snapshot Isolation\" (GSI).\n> I think GSI will solve the problem (atomic visibility) we are\n> discussing.\n>\n> Unlike READ COMMITTED, REPEATABLE READ acquires a snapshot at the time\n> when the first command is executed in a transaction (READ COMMITTED\n> acquires a snapshot at each command in a transaction). Pangea controls\n> the timing of the snapshot acquisition on pairs of transactions\n> (S1/N1,N2 or S2/N1,N2) so that each pair acquires the same\n> snapshot. To achieve this, while some transactions are trying to\n> acquire a snapshot, any commit operation should be postponed. 
Likewise\n> any snapshot acquisition should wait until any in-progress commit\n> operations are finished (see Algorithm I to III in the paper for more\n> details).\n>\n\nI haven't read the paper completely but it sounds quite restrictive\n(like both commits and snapshots need to wait). Another point is: do\nwe want some middleware involved in the solution? The main thing I was\nlooking into at this stage is whether the current 2PC implementation\nproposed by the patch is generic enough that we would later be able to\nintegrate a solution for atomic visibility.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Jun 2020 10:33:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Wed, 17 Jun 2020 at 09:01, Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>\n> > I've attached the new version patch set. 0006 is a separate patch\n> > which introduces 'prefer' mode to foreign_twophase_commit.\n>\n> I hope we can use this feature. Thank you for making patches and\n> discussions.\n> I'm currently understanding the logic and found some minor points to be\n> fixed.\n>\n> I'm sorry if my understanding is wrong.\n>\n> * The v22 patches need rebase as they can't apply to the current master.\n>\n> * FdwXactAtomicCommitParticipants said in\n> src/backend/access/fdwxact/README\n> is not implemented. Is FdwXactParticipants right?\n\nRight.\n\n>\n> * A following comment says that this code is for \"One-phase\",\n> but second argument of FdwXactParticipantEndTransaction() describes\n> this code is not \"onephase\".\n>\n> AtEOXact_FdwXact() in fdwxact.c\n> /* One-phase rollback foreign transaction */\n> FdwXactParticipantEndTransaction(fdw_part, false, false);\n>\n> static void\n> FdwXactParticipantEndTransaction(FdwXactParticipant *fdw_part, bool\n> onephase,\n> bool for_commit)\n>\n> * \"two_phase_commit\" option is mentioned in postgres-fdw.sgml,\n> but I can't find related code.\n>\n> * resolver.c comments have the sentence\n> containing two blanks.(Emergency Termination)\n>\n> * There are some inconsistency with PostgreSQL wiki.\n> https://wiki.postgresql.org/wiki/Atomic_Commit_of_Distributed_Transactions\n>\n> I understand it's difficult to keep consistency, I think it's ok to\n> fix later\n> when these patches almost be able to be committed.\n>\n> - I can't find \"two_phase_commit\" option in the source code.\n> But 2PC is work if the remote server's \"max_prepared_transactions\"\n> is set\n> to non zero value. It is correct work, isn't it?\n\nYes. I had removed two_phase_commit option from postgres_fdw.\nCurrently, postgres_fdw uses 2pc when 2pc is required. 
Therefore,\nmax_prepared_transactions needs to be set to more than zero, as you\nmentioned.\n\n>\n> - some parameters are renamed or added in latest patches.\n> max_prepared_foreign_transaction, max_prepared_transactions and so\n> on.\n>\n> - typo: froeign_transaction_resolver_timeout\n>\n\nThank you for your review! I've incorporated your comments on the\nlocal branch. I'll share the latest version of the patch.\n\nAlso, I've updated the wiki page. I'll try to keep the wiki page up-to-date.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 17 Jun 2020 14:07:16 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Tue, Jun 16, 2020 at 6:43 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Tue, Jun 16, 2020 at 3:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > Is there some mapping between GXID and XIDs allocated for each node or\n> > will each node use the GXID as XID to modify the data? Are we fine\n> > with parking the work for global snapshots and atomic visibility to a\n> > separate patch and just proceed with the design proposed by this\n> > patch?\n>\n> Distributed transaction involves, atomic commit, atomic visibility\n> and global consistency. 2PC is the only practical solution for atomic\n> commit. There are some improvements over 2PC but those are add ons to\n> the basic 2PC, which is what this patch provides. Atomic visibility\n> and global consistency however have alternative solutions but all of\n> those solutions require 2PC to be supported. Each of those are large\n> pieces of work and trying to get everything in may not work. Once we\n> have basic 2PC in place, there will be a ground to experiment with\n> solutions for global consistency and atomic visibility. If we manage\n> to do it right, we could make it pluggable as well.\n>\n\nI think it is easier said than done. If you want to make it pluggable\nor want alternative solutions to adapt the 2PC support provided by us,\nwe should have some idea of what those alternative solutions look like. 
I\nam not saying we have to figure out each and every detail of those\nsolutions, but without paying any attention to the high-level picture\nwe might end up doing something for 2PC here which either needs a lot\nof modifications or might need a design change, which would be bad.\nBasically, if we later decide to use something like Global Xid to\nachieve other features then what we are doing here might not work.\n\nI think it is a good idea to complete the work in pieces where each\npiece is useful on its own, but without clarity on the overall\nsolution that could be a recipe for disaster. It is possible that you\nhave some idea in your mind where you can see clearly how this piece\nof work can fit in the bigger picture, but it is not very apparent to\nothers and doesn't seem to be documented anywhere.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Jun 2020 10:49:40 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "> okay, so it seems we need few things which middleware (Pangea) expects\n> if we have to follow the design of paper.\n\nYes.\n\n> I haven't read the paper completely but it sounds quite restrictive\n> (like both commits and snapshots need to wait).\n\nMaybe. There is a performance evaluation in the paper. You might want\nto take a look at it.\n\n> Another point is that\n> do we want some middleware involved in the solution? The main thing\n> I was looking into at this stage is do we think that the current\n> implementation proposed by the patch for 2PC is generic enough that we\n> would be later able to integrate the solution for atomic visibility?\n\nMy concern is, FDW+2PC without atomic visibility could lead to data\ninconsistency among servers in some cases. If my understanding is\ncorrect, FDW+2PC (without atomic visibility) cannot prevent data\ninconsistency in the case below. Initially table t1 has only one row\nwith i = 0 on both N1 and N2. By executing S1 and S2 concurrently, t1\nnow has different value of i, 0 and 1.\n\nS1/N1: DROP TABLE t1;\nDROP TABLE\nS1/N1: CREATE TABLE t1(i int);\nCREATE TABLE\nS1/N1: INSERT INTO t1 VALUES(0);\nINSERT 0 1\nS1/N2: DROP TABLE t1;\nDROP TABLE\nS1/N2: CREATE TABLE t1(i int);\nCREATE TABLE\nS1/N2: INSERT INTO t1 VALUES(0);\nINSERT 0 1\nS1/N1: BEGIN;\nBEGIN\nS1/N2: BEGIN;\nBEGIN\nS1/N1: UPDATE t1 SET i = i + 1;\t-- i = 1\nUPDATE 1\nS1/N2: UPDATE t1 SET i = i + 1; -- i = 1\nUPDATE 1\nS1/N1: PREPARE TRANSACTION 's1n1';\nPREPARE TRANSACTION\nS1/N1: COMMIT PREPARED 's1n1';\nCOMMIT PREPARED\nS2/N1: BEGIN;\nBEGIN\nS2/N2: BEGIN;\nBEGIN\nS2/N2: DELETE FROM t1 WHERE i = 1;\nDELETE 0\nS2/N1: DELETE FROM t1 WHERE i = 1;\nDELETE 1\nS1/N2: PREPARE TRANSACTION 's1n2';\nPREPARE TRANSACTION\nS2/N1: PREPARE TRANSACTION 's2n1';\nPREPARE TRANSACTION\nS2/N2: PREPARE TRANSACTION 's2n2';\nPREPARE TRANSACTION\nS1/N2: COMMIT PREPARED 's1n2';\nCOMMIT PREPARED\nS2/N1: COMMIT PREPARED 's2n1';\nCOMMIT PREPARED\nS2/N2: COMMIT PREPARED 
's2n2';\nCOMMIT PREPARED\nS2/N1: SELECT * FROM t1;\n i \n---\n(0 rows)\n\nS2/N2: SELECT * FROM t1;\n i \n---\n 1\n(1 row)\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Thu, 18 Jun 2020 08:31:20 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take\n 2"
},
{
"msg_contents": "On Thu, 18 Jun 2020 at 08:31, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>\n> > okay, so it seems we need few things which middleware (Pangea) expects\n> > if we have to follow the design of paper.\n>\n> Yes.\n>\n> > I haven't read the paper completely but it sounds quite restrictive\n> > (like both commits and snapshots need to wait).\n>\n> Maybe. There is a performance evaluation in the paper. You might want\n> to take a look at it.\n>\n> > Another point is that\n> > do we want some middleware involved in the solution? The main thing\n> > I was looking into at this stage is do we think that the current\n> > implementation proposed by the patch for 2PC is generic enough that we\n> > would be later able to integrate the solution for atomic visibility?\n>\n> My concern is, FDW+2PC without atomic visibility could lead to data\n> inconsistency among servers in some cases. If my understanding is\n> correct, FDW+2PC (without atomic visibility) cannot prevent data\n> inconsistency in the case below. Initially table t1 has only one row\n> with i = 0 on both N1 and N2. By executing S1 and S2 concurrently, t1\n> now has different value of i, 0 and 1.\n\nIIUC the following sequence won't happen because COMMIT PREPARED\n's1n1' cannot be executed before PREPARE TRANSACTION 's1n2'. But as\nyou mentioned, we cannot prevent data inconsistency even with FDW+2PC,\ne.g., when S2 starts a transaction between COMMIT PREPARED on N1 and\nCOMMIT PREPARED on N2 by S1. The point is that this data inconsistency\nis caused by an inconsistent read, not by inconsistent commit\nresults. I think there are several kinds of possibilities causing data\ninconsistency, and atomic commit and atomic visibility eliminate\ndifferent ones. 
We can eliminate all possibilities of data\ninconsistency only after we support 2PC and global MVCC.\n\n>\n> S1/N1: DROP TABLE t1;\n> DROP TABLE\n> S1/N1: CREATE TABLE t1(i int);\n> CREATE TABLE\n> S1/N1: INSERT INTO t1 VALUES(0);\n> INSERT 0 1\n> S1/N2: DROP TABLE t1;\n> DROP TABLE\n> S1/N2: CREATE TABLE t1(i int);\n> CREATE TABLE\n> S1/N2: INSERT INTO t1 VALUES(0);\n> INSERT 0 1\n> S1/N1: BEGIN;\n> BEGIN\n> S1/N2: BEGIN;\n> BEGIN\n> S1/N1: UPDATE t1 SET i = i + 1; -- i = 1\n> UPDATE 1\n> S1/N2: UPDATE t1 SET i = i + 1; -- i = 1\n> UPDATE 1\n> S1/N1: PREPARE TRANSACTION 's1n1';\n> PREPARE TRANSACTION\n> S1/N1: COMMIT PREPARED 's1n1';\n> COMMIT PREPARED\n> S2/N1: BEGIN;\n> BEGIN\n> S2/N2: BEGIN;\n> BEGIN\n> S2/N2: DELETE FROM t1 WHERE i = 1;\n> DELETE 0\n> S2/N1: DELETE FROM t1 WHERE i = 1;\n> DELETE 1\n> S1/N2: PREPARE TRANSACTION 's1n2';\n> PREPARE TRANSACTION\n> S2/N1: PREPARE TRANSACTION 's2n1';\n> PREPARE TRANSACTION\n> S2/N2: PREPARE TRANSACTION 's2n2';\n> PREPARE TRANSACTION\n> S1/N2: COMMIT PREPARED 's1n2';\n> COMMIT PREPARED\n> S2/N1: COMMIT PREPARED 's2n1';\n> COMMIT PREPARED\n> S2/N2: COMMIT PREPARED 's2n2';\n> COMMIT PREPARED\n> S2/N1: SELECT * FROM t1;\n> i\n> ---\n> (0 rows)\n>\n> S2/N2: SELECT * FROM t1;\n> i\n> ---\n> 1\n> (1 row)\n>\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 18 Jun 2020 10:57:12 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": ">> My concern is, FDW+2PC without atomic visibility could lead to data\n>> inconsistency among servers in some cases. If my understanding is\n>> correct, FDW+2PC (without atomic visibility) cannot prevent data\n>> inconsistency in the case below. Initially table t1 has only one row\n>> with i = 0 on both N1 and N2. By executing S1 and S2 concurrently, t1\n>> now has different value of i, 0 and 1.\n> \n> IIUC the following sequence won't happen because COMMIT PREPARED\n> 's1n1' cannot be executed before PREPARE TRANSACTION 's1n2'.\n\nYou are right.\n\n> But as\n> you mentioned, we cannot prevent data inconsistency even with FDW+2PC\n> e.g., when S2 starts a transaction between COMMIT PREPARED on N1 and\n> COMMIT PREPARED on N2 by S1.\n\nOk, example updated.\n\nS1/N1: DROP TABLE t1;\nDROP TABLE\nS1/N1: CREATE TABLE t1(i int);\nCREATE TABLE\nS1/N1: INSERT INTO t1 VALUES(0);\nINSERT 0 1\nS1/N2: DROP TABLE t1;\nDROP TABLE\nS1/N2: CREATE TABLE t1(i int);\nCREATE TABLE\nS1/N2: INSERT INTO t1 VALUES(0);\nINSERT 0 1\nS1/N1: BEGIN;\nBEGIN\nS1/N2: BEGIN;\nBEGIN\nS1/N1: UPDATE t1 SET i = i + 1;\t-- i = 1\nUPDATE 1\nS1/N2: UPDATE t1 SET i = i + 1; -- i = 1\nUPDATE 1\nS2/N1: BEGIN;\nBEGIN\nS2/N2: BEGIN;\nBEGIN\nS1/N1: PREPARE TRANSACTION 's1n1';\nPREPARE TRANSACTION\nS1/N2: PREPARE TRANSACTION 's1n2';\nPREPARE TRANSACTION\nS2/N1: PREPARE TRANSACTION 's2n1';\nPREPARE TRANSACTION\nS2/N2: PREPARE TRANSACTION 's2n2';\nPREPARE TRANSACTION\nS1/N1: COMMIT PREPARED 's1n1';\nCOMMIT PREPARED\nS2/N1: DELETE FROM t1 WHERE i = 1;\nDELETE 1\nS2/N2: DELETE FROM t1 WHERE i = 1;\nDELETE 0\nS1/N2: COMMIT PREPARED 's1n2';\nCOMMIT PREPARED\nS2/N1: COMMIT PREPARED 's2n1';\nCOMMIT PREPARED\nS2/N2: COMMIT PREPARED 's2n2';\nCOMMIT PREPARED\nS2/N1: SELECT * FROM t1;\n i \n---\n(0 rows)\n\nS2/N2: SELECT * FROM t1;\n i \n---\n 1\n(1 row)\n\n> The point is this data inconsistency is\n> lead by an inconsistent read but not by an inconsistent commit\n> results. 
I think there are kinds of possibilities causing data\n> inconsistency but atomic commit and atomic visibility eliminate\n> different possibilities. We can eliminate all possibilities of data\n> inconsistency only after we support 2PC and globally MVCC.\n\nIMO any permanent data inconsistency is a serious problem for users no\nmatter what the technical reasons are.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Thu, 18 Jun 2020 12:30:41 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take\n 2"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 5:01 AM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>\n> > Another point is that\n> > do we want some middleware involved in the solution? The main thing\n> > I was looking into at this stage is do we think that the current\n> > implementation proposed by the patch for 2PC is generic enough that we\n> > would be later able to integrate the solution for atomic visibility?\n>\n> My concern is, FDW+2PC without atomic visibility could lead to data\n> inconsistency among servers in some cases. If my understanding is\n> correct, FDW+2PC (without atomic visibility) cannot prevent data\n> inconsistency in the case below.\n>\n\nYou are right, and we are not going to claim that after this feature is\ncommitted. This feature has independent use cases; for example, it can\nallow parallel copy when foreign tables are involved, once we have\nparallel copy, and surely there will be more. I think it is clear that\nwe need atomic visibility (some way to ensure global consistency) to\navoid the data inconsistency problems you and I are worried about, and\nwe can do that as a separate patch. But at this stage, it would be good\nif we could have some high-level design of that as well, so that if we\nneed some adjustments in the design/implementation of this patch then\nwe can make them now. I think there is some discussion on the other\nthreads (like [1]) about the kind of stuff we are worried about, which\nI need to follow up on to study the impact.\n\nHaving said that, I don't think that is a reason to stop reviewing or\nworking on this patch.\n\n[1] - https://www.postgresql.org/message-id/flat/21BC916B-80A1-43BF-8650-3363CCDAE09C%40postgrespro.ru\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 18 Jun 2020 16:09:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 04:09:56PM +0530, Amit Kapila wrote:\n> You are right and we are not going to claim that after this feature is\n> committed. This feature has independent use cases like it can allow\n> parallel copy when foreign tables are involved once we have parallel\n> copy and surely there will be more. I think it is clear that we need\n> atomic visibility (some way to ensure global consistency) to avoid the\n> data inconsistency problems you and I are worried about and we can do\n> that as a separate patch but at this stage, it would be good if we can\n> have some high-level design of that as well so that if we need some\n> adjustments in the design/implementation of this patch then we can do\n> it now. I think there is some discussion on the other threads (like\n> [1]) about the kind of stuff we are worried about which I need to\n> follow up on to study the impact.\n> \n> Having said that, I don't think that is a reason to stop reviewing or\n> working on this patch.\n\nI think our first step is to allow sharding to work on read-only\ndatabases, e.g. data warehousing. Read/write will require global\nsnapshots. It is true that 2PC is of limited usefulness without global\nsnapshots, because, by definition, systems using 2PC are read-write\nsystems. However, I can see cases where you are loading data into a\ndata warehouse but want 2PC so the systems remain consistent even if\nthere is a crash during loading.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Thu, 18 Jun 2020 09:19:02 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 6:49 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Thu, Jun 18, 2020 at 04:09:56PM +0530, Amit Kapila wrote:\n> > You are right and we are not going to claim that after this feature is\n> > committed. This feature has independent use cases like it can allow\n> > parallel copy when foreign tables are involved once we have parallel\n> > copy and surely there will be more. I think it is clear that we need\n> > atomic visibility (some way to ensure global consistency) to avoid the\n> > data inconsistency problems you and I are worried about and we can do\n> > that as a separate patch but at this stage, it would be good if we can\n> > have some high-level design of that as well so that if we need some\n> > adjustments in the design/implementation of this patch then we can do\n> > it now. I think there is some discussion on the other threads (like\n> > [1]) about the kind of stuff we are worried about which I need to\n> > follow up on to study the impact.\n> >\n> > Having said that, I don't think that is a reason to stop reviewing or\n> > working on this patch.\n>\n> I think our first step is to allow sharding to work on read-only\n> databases, e.g. data warehousing. Read/write will require global\n> snapshots. It is true that 2PC is limited usefulness without global\n> snapshots, because, by definition, systems using 2PC are read-write\n> systems. However, I can see cases where you are loading data into a\n> data warehouse but want 2PC so the systems remain consistent even if\n> there is a crash during loading.\n>\n\nFor sharding, just implementing 2PC without global consistency\nprovides limited functionality. But for general purpose federated\ndatabases 2PC serves an important functionality - atomic visibility.\nWhen PostgreSQL is used as one of the coordinators in a heterogeneous\nfederated database system, it's not expected to have global\nconsistency or even atomic visibility. 
But it needs a guarantee that\nonce a transaction commits, all its legs are committed. 2PC provides\nthat guarantee as long as the other databases keep their promise that\nprepared transactions will always get committed when so requested.\nA subtlety here is the HA requirement on these databases. So the\nfunctionality provided by this patch is important outside the sharding\ncase as well.\n\nAs you said, even for a data warehousing application, there is some\nwrite in the form of loading/merging data. If that write happens\nacross multiple servers, we need atomic commit to be guaranteed. Some\nof these applications can work even if global consistency and atomic\nvisibility are guaranteed eventually.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 19 Jun 2020 15:35:55 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
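The atomic-commit guarantee described above — once the coordinator decides to commit, every leg of the transaction must commit — is the classic two-phase commit rule. A minimal sketch in Python (all class and function names here are hypothetical illustrations, not the patch's actual C API):

```python
# Toy sketch of two-phase commit: PREPARE on every participant first;
# only if *all* prepares succeed does the coordinator issue COMMIT
# PREPARED, otherwise every already-prepared leg is rolled back.
# Illustrative only — not the patch's implementation.

class Participant:
    def __init__(self, name, fail_on_prepare=False):
        self.name = name
        self.fail_on_prepare = fail_on_prepare
        self.state = "active"

    def prepare(self):
        if self.fail_on_prepare:
            self.state = "aborted"
            raise RuntimeError(f"{self.name}: PREPARE failed")
        self.state = "prepared"

    def commit_prepared(self):
        assert self.state == "prepared"
        self.state = "committed"

    def rollback(self):
        if self.state != "committed":
            self.state = "aborted"

def atomic_commit(participants):
    prepared = []
    try:
        for p in participants:          # phase 1: PREPARE TRANSACTION
            p.prepare()
            prepared.append(p)
    except RuntimeError:
        for p in prepared:              # any failure: roll back all legs
            p.rollback()
        return False
    for p in participants:              # phase 2: COMMIT PREPARED
        p.commit_prepared()
    return True
```

With one leg failing at prepare, no leg ends up committed; with all prepares succeeding, every leg commits — the all-or-nothing property under discussion. What 2PC alone does not give is atomic visibility: a concurrent reader could still observe one leg committed before another.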
{
"msg_contents": "On Wed, 17 Jun 2020 at 14:07, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Wed, 17 Jun 2020 at 09:01, Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n> >\n> > > I've attached the new version patch set. 0006 is a separate patch\n> > > which introduces 'prefer' mode to foreign_twophase_commit.\n> >\n> > I hope we can use this feature. Thank you for making patches and\n> > discussions.\n> > I'm currently understanding the logic and found some minor points to be\n> > fixed.\n> >\n> > I'm sorry if my understanding is wrong.\n> >\n> > * The v22 patches need rebase as they can't apply to the current master.\n> >\n> > * FdwXactAtomicCommitParticipants said in\n> > src/backend/access/fdwxact/README\n> > is not implemented. Is FdwXactParticipants right?\n>\n> Right.\n>\n> >\n> > * A following comment says that this code is for \"One-phase\",\n> > but second argument of FdwXactParticipantEndTransaction() describes\n> > this code is not \"onephase\".\n> >\n> > AtEOXact_FdwXact() in fdwxact.c\n> > /* One-phase rollback foreign transaction */\n> > FdwXactParticipantEndTransaction(fdw_part, false, false);\n> >\n> > static void\n> > FdwXactParticipantEndTransaction(FdwXactParticipant *fdw_part, bool\n> > onephase,\n> > bool for_commit)\n> >\n> > * \"two_phase_commit\" option is mentioned in postgres-fdw.sgml,\n> > but I can't find related code.\n> >\n> > * resolver.c comments have the sentence\n> > containing two blanks.(Emergency Termination)\n> >\n> > * There are some inconsistency with PostgreSQL wiki.\n> > https://wiki.postgresql.org/wiki/Atomic_Commit_of_Distributed_Transactions\n> >\n> > I understand it's difficult to keep consistency, I think it's ok to\n> > fix later\n> > when these patches almost be able to be committed.\n> >\n> > - I can't find \"two_phase_commit\" option in the source code.\n> > But 2PC is work if the remote server's \"max_prepared_transactions\"\n> > is set\n> > to non zero value. 
It is correct work, isn't it?\n>\n> Yes. I had removed two_phase_commit option from postgres_fdw.\n> Currently, postgres_fdw uses 2pc when 2pc is required. Therefore,\n> max_prepared_transactions needs to be set to more than one, as you\n> mentioned.\n>\n> >\n> > - some parameters are renamed or added in latest patches.\n> > max_prepared_foreign_transaction, max_prepared_transactions and so\n> > on.\n> >\n> > - typo: froeign_transaction_resolver_timeout\n> >\n>\n> Thank you for your review! I've incorporated your comments on the\n> local branch. I'll share the latest version patch.\n>\n> Also, I've updated the wiki page. I'll try to keep the wiki page up-to-date.\n>\n\nI've attached the latest version patches. I've incorporated the review\ncomments I got so far and improved locking strategy.\n\nPlease review it.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 23 Jun 2020 12:33:02 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Tue, Jun 23, 2020 at 9:03 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n>\n> I've attached the latest version patches. I've incorporated the review\n> comments I got so far and improved locking strategy.\n>\n\nThanks for updating the patch.\n\n> Please review it.\n>\n\nI think at this stage it is important that we do some study of various\napproaches to achieve this work and come up with a comparison of the\npros and cons of each approach (a) what this patch provides, (b) what\nis implemented in Global Snapshots patch [1], (c) if possible, what is\nimplemented in Postgres-XL. I fear that if go too far in spending\neffort on this and later discovered that it can be better done via\nsome other available patch/work (maybe due to a reasons like that\napproach can easily extended to provide atomic visibility or the\ndesign is more robust, etc.) then it can lead to a lot of rework.\n\n[1] - https://www.postgresql.org/message-id/20200622150636.GB28999%40momjian.us\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Jun 2020 09:56:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Tue, 23 Jun 2020 at 13:26, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jun 23, 2020 at 9:03 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> >\n> > I've attached the latest version patches. I've incorporated the review\n> > comments I got so far and improved locking strategy.\n> >\n>\n> Thanks for updating the patch.\n>\n> > Please review it.\n> >\n>\n> I think at this stage it is important that we do some study of various\n> approaches to achieve this work and come up with a comparison of the\n> pros and cons of each approach (a) what this patch provides, (b) what\n> is implemented in Global Snapshots patch [1], (c) if possible, what is\n> implemented in Postgres-XL. I fear that if go too far in spending\n> effort on this and later discovered that it can be better done via\n> some other available patch/work (maybe due to a reasons like that\n> approach can easily extended to provide atomic visibility or the\n> design is more robust, etc.) then it can lead to a lot of rework.\n\nYeah, I have no objection to that plan but I think we also need to\nkeep in mind that (b), (c), and whatever we are thinking about global\nconsistency are talking about only PostgreSQL (and postgres_fdw). On\nthe other hand, this patch needs to implement the feature that can\nresolve the atomic commit problem more generically, because the\nforeign server might be using oracle_fdw, mysql_fdw, or other FDWs\nconnecting database systems supporting 2PC.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 26 Jun 2020 14:19:37 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, Jun 26, 2020 at 10:50 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Tue, 23 Jun 2020 at 13:26, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > I think at this stage it is important that we do some study of various\n> > approaches to achieve this work and come up with a comparison of the\n> > pros and cons of each approach (a) what this patch provides, (b) what\n> > is implemented in Global Snapshots patch [1], (c) if possible, what is\n> > implemented in Postgres-XL. I fear that if go too far in spending\n> > effort on this and later discovered that it can be better done via\n> > some other available patch/work (maybe due to a reasons like that\n> > approach can easily extended to provide atomic visibility or the\n> > design is more robust, etc.) then it can lead to a lot of rework.\n>\n> Yeah, I have no objection to that plan but I think we also need to\n> keep in mind that (b), (c), and whatever we are thinking about global\n> consistency are talking about only PostgreSQL (and postgres_fdw).\n>\n\nI think we should explore if those approaches could be extended for\nFDWs and if not then that could be considered as a disadvantage of\nthat approach.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 26 Jun 2020 14:13:51 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": ">> The point is this data inconsistency is\n>> lead by an inconsistent read but not by an inconsistent commit\n>> results. I think there are kinds of possibilities causing data\n>> inconsistency but atomic commit and atomic visibility eliminate\n>> different possibilities. We can eliminate all possibilities of data\n>> inconsistency only after we support 2PC and globally MVCC.\n> \n> IMO any permanent data inconsistency is a serious problem for users no\n> matter what the technical reasons are.\n\nI have incorporated \"Pangea\" algorithm into Pgpool-II to implement the\natomic visibility. In a test below I have two PostgreSQL servers\n(stock v12), server0 (port 11002) and server1 (port\n11003). default_transaction_isolation was set to 'repeatable read' on\nboth PostgreSQL, this is required by Pangea. Pgpool-II replicates\nwrite queries and send them to both server0 and server1. There are two\ntables \"t1\" (having only 1 integer column \"i\") and \"log\" (having only\n1 integer c column \"i\"). I have run following script\n(inconsistency1.sql) via pgbench:\n\nBEGIN;\nUPDATE t1 SET i = i + 1;\nEND;\n\nlike: pgbench -n -c 1 -T 30 -f inconsistency1.sql\n\nIn the moment I have run another session from pgbench concurrently:\n\nBEGIN;\nINSERT INTO log SELECT * FROM t1;\nEND;\n\npgbench -n -c 1 -T 30 -f inconsistency2.sql\n\nAfter finishing those two pgbench runs, I ran following COPY to see if\ncontents of table \"log\" are identical in server0 and server1:\npsql -p 11002 -c \"\\copy log to '11002.txt'\"\npsql -p 11003 -c \"\\copy log to '11003.txt'\"\ncmp 11002.txt 11003.txt\n\nThe new Pgpool-II incorporating Pangea showed that 11002.txt and\n11003.txt are identical as expected. This indicates that the atomic\nvisibility are kept.\n\nOn the other hand Pgpool-II which does not implement Pangea showed\ndifferences in those files.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. 
Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Wed, 01 Jul 2020 13:43:31 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take\n 2"
},
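Ishii-san's pgbench test can be modelled in miniature: a writer updates a counter on two replicas while a reader snapshots it into a log on both; the logs stay identical only if each multi-server operation is applied atomically with respect to the other session. A toy simulation (purely illustrative — this is not Pangea or Pgpool-II code):

```python
import threading

# A writer increments t1 on two replicas; a reader copies t1 into log
# on both. If the reader can interleave between the two halves of a
# replicated write, the two log tables can diverge; making each
# multi-server operation atomic (the lock below) keeps them identical.

class Replica:
    def __init__(self):
        self.t1 = 0
        self.log = []

def run(atomic_visibility, rounds=2000):
    r0, r1 = Replica(), Replica()
    gate = threading.Lock()

    def writer():
        for _ in range(rounds):
            if atomic_visibility:
                with gate:
                    r0.t1 += 1
                    r1.t1 += 1
            else:
                r0.t1 += 1      # reader may observe r0 ahead of r1 here
                r1.t1 += 1

    def reader():
        for _ in range(rounds):
            if atomic_visibility:
                with gate:
                    r0.log.append(r0.t1)
                    r1.log.append(r1.t1)
            else:
                r0.log.append(r0.t1)
                r1.log.append(r1.t1)

    tw = threading.Thread(target=writer)
    tr = threading.Thread(target=reader)
    tw.start(); tr.start(); tw.join(); tr.join()
    return r0.log == r1.log     # the "cmp 11002.txt 11003.txt" step

```

With `atomic_visibility=False` the two logs typically diverge, mirroring the cmp difference reported above; that outcome is timing-dependent, so only the atomic case is asserted.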
{
"msg_contents": "> I've attached the latest version patches. I've incorporated the review\n> comments I got so far and improved locking strategy.\n\nThanks for updating the patch!\nI have three questions about the v23 patches.\n\n\n1. messages related to user canceling\n\nIn my understanding, there are two messages\nwhich can be output when a user cancels the COMMIT command.\n\nA. When prepare is failed, the output shows that\n committed locally but some error is occurred.\n\n```\npostgres=*# COMMIT;\n^CCancel request sent\nWARNING: canceling wait for resolving foreign transaction due to user \nrequest\nDETAIL: The transaction has already committed locally, but might not \nhave been committed on the foreign server.\nERROR: server closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nCONTEXT: remote SQL command: PREPARE TRANSACTION \n'fx_1020791818_519_16399_10'\n```\n\nB. When prepare is succeeded,\n the output show that committed locally.\n\n```\npostgres=*# COMMIT;\n^CCancel request sent\nWARNING: canceling wait for resolving foreign transaction due to user \nrequest\nDETAIL: The transaction has already committed locally, but might not \nhave been committed on the foreign server.\nCOMMIT\n```\n\nIn case of A, I think that \"committed locally\" message can confuse user.\nBecause although messages show committed but the transaction is \n\"ABORTED\".\n\nI think \"committed\" message means that \"ABORT\" is committed locally.\nBut is there a possibility of misunderstanding?\n\nIn case of A, it's better to change message for user friendly, isn't it?\n\n\n2. typo\n\nIs trasnactions in fdwxact.c typo?\n\n\n3. FdwXactGetWaiter in fdwxact.c return unused value\n\nFdwXactGetWaiter is called in FXRslvLoop function.\nIt returns *waitXid_p, but FXRslvloop doesn't seem to\nuse *waitXid_p. Do we need to return it?\n\n\nRegards,\n--\nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 14 Jul 2020 09:08:33 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "\n\nOn 2020/07/14 9:08, Masahiro Ikeda wrote:\n>> I've attached the latest version patches. I've incorporated the review\n>> comments I got so far and improved locking strategy.\n> \n> Thanks for updating the patch!\n\n+1\nI'm interested in these patches and now studying them. While checking\nthe behaviors of the patched PostgreSQL, I got three comments.\n\n1. We can access to the foreign table even during recovery in the HEAD.\nBut in the patched version, when I did that, I got the following error.\nIs this intentional?\n\nERROR: cannot assign TransactionIds during recovery\n\n2. With the patch, when INSERT/UPDATE/DELETE are executed both in\nlocal and remote servers, 2PC is executed at the commit phase. But\nwhen write SQL (e.g., TRUNCATE) except INSERT/UPDATE/DELETE are\nexecuted in local and INSERT/UPDATE/DELETE are executed in remote,\n2PC is NOT executed. Is this safe?\n\n3. XACT_FLAGS_WROTENONTEMPREL is set when INSERT/UPDATE/DELETE\nare executed. But it's not reset even when those queries are canceled by\nROLLBACK TO SAVEPOINT. This may cause unnecessary 2PC at the commit phase.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 14 Jul 2020 11:19:36 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
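Fujii-san's third point — XACT_FLAGS_WROTENONTEMPREL surviving ROLLBACK TO SAVEPOINT — comes down to saving and restoring the flag at savepoint boundaries. A toy sketch of savepoint-aware flag tracking (names are illustrative, not the patch's code; real savepoints are not consumed by ROLLBACK TO, which this simplifies away):

```python
# Sketch: a "wrote a non-temp relation" flag set by INSERT/UPDATE/DELETE
# must be rolled back together with ROLLBACK TO SAVEPOINT, otherwise the
# commit path would run 2PC unnecessarily.

class XactFlags:
    def __init__(self):
        self.wrote_non_temp_rel = False
        self._saved = []                  # one snapshot per savepoint

    def record_write(self):
        self.wrote_non_temp_rel = True

    def savepoint(self):
        self._saved.append(self.wrote_non_temp_rel)

    def rollback_to_savepoint(self):
        # restore the flag as it was when the savepoint was taken
        self.wrote_non_temp_rel = self._saved.pop()

    def release_savepoint(self):
        # writes since the savepoint remain part of the transaction
        self._saved.pop()
```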
{
"msg_contents": "> I've attached the latest version patches. I've incorporated the review\n> comments I got so far and improved locking strategy.\n\nI want to ask a question about streaming replication with 2PC.\nAre you going to support 2PC with streaming replication?\n\nI tried streaming replication using v23 patches.\nI confirm that 2PC works with streaming replication,\nwhich there are primary/standby coordinator.\n\nBut, in my understanding, the WAL of \"PREPARE\" and\n\"COMMIT/ABORT PREPARED\" can't be replicated to the standby server in \nsync.\n\nIf this is right, the unresolved transaction can be occurred.\n\nFor example,\n\n1. PREPARE is done\n2. crash primary before the WAL related to PREPARE is\n replicated to the standby server\n3. promote standby server // but can't execute \"ABORT PREPARED\"\n\nIn above case, the remote server has the unresolved transaction.\nCan we solve this problem to support in-sync replication?\n\nBut, I think some users use async replication for performance.\nDo we need to document the limitation or make another solution?\n\nRegards,\n\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 14 Jul 2020 17:24:58 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
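The failover scenario outlined above leaves an in-doubt prepared transaction on the remote server. One conventional cleanup a promoted coordinator could run is presumed abort: scan the remote server's prepared transactions (e.g. via pg_prepared_xacts) for GIDs it owns and roll back any it has no commit decision for. A hypothetical sketch — the patch does not necessarily implement this:

```python
# Presumed-abort resolution after promotion: any remote prepared tx
# carrying this coordinator's GID prefix but absent from the new
# coordinator's durable state was never decided to commit, so it is
# safe to roll back. Illustrative only.

GID_PREFIX = "fx_"   # the patch's GIDs look like fx_<...> per the thread

def resolve_orphans(remote_prepared_gids, committed_gids):
    """Split remote prepared GIDs into (commit, rollback) decisions."""
    commit, rollback = [], []
    for gid in remote_prepared_gids:
        if not gid.startswith(GID_PREFIX):
            continue                      # not ours: leave it alone
        if gid in committed_gids:
            commit.append(gid)            # coordinator decided commit
        else:
            rollback.append(gid)          # in doubt: presume abort
    return commit, rollback
```

Presuming abort is safe here precisely because a PREPARE whose record never became durable on the coordinator side was, by definition, never part of a commit decision.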
{
"msg_contents": "On Tue, 14 Jul 2020 at 09:08, Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>\n> > I've attached the latest version patches. I've incorporated the review\n> > comments I got so far and improved locking strategy.\n>\n> Thanks for updating the patch!\n> I have three questions about the v23 patches.\n>\n>\n> 1. messages related to user canceling\n>\n> In my understanding, there are two messages\n> which can be output when a user cancels the COMMIT command.\n>\n> A. When prepare is failed, the output shows that\n> committed locally but some error is occurred.\n>\n> ```\n> postgres=*# COMMIT;\n> ^CCancel request sent\n> WARNING: canceling wait for resolving foreign transaction due to user\n> request\n> DETAIL: The transaction has already committed locally, but might not\n> have been committed on the foreign server.\n> ERROR: server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> CONTEXT: remote SQL command: PREPARE TRANSACTION\n> 'fx_1020791818_519_16399_10'\n> ```\n>\n> B. When prepare is succeeded,\n> the output show that committed locally.\n>\n> ```\n> postgres=*# COMMIT;\n> ^CCancel request sent\n> WARNING: canceling wait for resolving foreign transaction due to user\n> request\n> DETAIL: The transaction has already committed locally, but might not\n> have been committed on the foreign server.\n> COMMIT\n> ```\n>\n> In case of A, I think that \"committed locally\" message can confuse user.\n> Because although messages show committed but the transaction is\n> \"ABORTED\".\n>\n> I think \"committed\" message means that \"ABORT\" is committed locally.\n> But is there a possibility of misunderstanding?\n\nNo, you're right. I'll fix it in the next version patch.\n\nI think synchronous replication also has the same problem. It says\n\"the transaction has already committed\" but it's not true when\nexecuting ROLLBACK PREPARED.\n\nBTW how did you test the case (A)? 
It says canceling wait for foreign\ntransaction resolution but the remote SQL command is PREPARE\nTRANSACTION.\n\n>\n> In case of A, it's better to change message for user friendly, isn't it?\n>\n>\n> 2. typo\n>\n> Is trasnactions in fdwxact.c typo?\n>\n\nFixed.\n\n>\n> 3. FdwXactGetWaiter in fdwxact.c return unused value\n>\n> FdwXactGetWaiter is called in FXRslvLoop function.\n> It returns *waitXid_p, but FXRslvloop doesn't seem to\n> use *waitXid_p. Do we need to return it?\n\nRemoved.\n\nI've incorporated the above your comments in the local branch. I'll\npost the latest version patch after incorporating other comments soon.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 15 Jul 2020 15:06:15 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "\n\nOn 2020/07/15 15:06, Masahiko Sawada wrote:\n> On Tue, 14 Jul 2020 at 09:08, Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>>\n>>> I've attached the latest version patches. I've incorporated the review\n>>> comments I got so far and improved locking strategy.\n>>\n>> Thanks for updating the patch!\n>> I have three questions about the v23 patches.\n>>\n>>\n>> 1. messages related to user canceling\n>>\n>> In my understanding, there are two messages\n>> which can be output when a user cancels the COMMIT command.\n>>\n>> A. When prepare is failed, the output shows that\n>> committed locally but some error is occurred.\n>>\n>> ```\n>> postgres=*# COMMIT;\n>> ^CCancel request sent\n>> WARNING: canceling wait for resolving foreign transaction due to user\n>> request\n>> DETAIL: The transaction has already committed locally, but might not\n>> have been committed on the foreign server.\n>> ERROR: server closed the connection unexpectedly\n>> This probably means the server terminated abnormally\n>> before or while processing the request.\n>> CONTEXT: remote SQL command: PREPARE TRANSACTION\n>> 'fx_1020791818_519_16399_10'\n>> ```\n>>\n>> B. When prepare is succeeded,\n>> the output show that committed locally.\n>>\n>> ```\n>> postgres=*# COMMIT;\n>> ^CCancel request sent\n>> WARNING: canceling wait for resolving foreign transaction due to user\n>> request\n>> DETAIL: The transaction has already committed locally, but might not\n>> have been committed on the foreign server.\n>> COMMIT\n>> ```\n>>\n>> In case of A, I think that \"committed locally\" message can confuse user.\n>> Because although messages show committed but the transaction is\n>> \"ABORTED\".\n>>\n>> I think \"committed\" message means that \"ABORT\" is committed locally.\n>> But is there a possibility of misunderstanding?\n> \n> No, you're right. I'll fix it in the next version patch.\n> \n> I think synchronous replication also has the same problem. 
It says\n> \"the transaction has already committed\" but it's not true when\n> executing ROLLBACK PREPARED.\n\nYes. Also the same message is logged when executing PREPARE TRANSACTION.\nMaybe it should be changed to \"the transaction has already prepared\".\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 15 Jul 2020 17:29:09 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On 2020-07-15 15:06, Masahiko Sawada wrote:\n> On Tue, 14 Jul 2020 at 09:08, Masahiro Ikeda <ikedamsh@oss.nttdata.com> \n> wrote:\n>> \n>> > I've attached the latest version patches. I've incorporated the review\n>> > comments I got so far and improved locking strategy.\n>> \n>> Thanks for updating the patch!\n>> I have three questions about the v23 patches.\n>> \n>> \n>> 1. messages related to user canceling\n>> \n>> In my understanding, there are two messages\n>> which can be output when a user cancels the COMMIT command.\n>> \n>> A. When prepare is failed, the output shows that\n>> committed locally but some error is occurred.\n>> \n>> ```\n>> postgres=*# COMMIT;\n>> ^CCancel request sent\n>> WARNING: canceling wait for resolving foreign transaction due to user\n>> request\n>> DETAIL: The transaction has already committed locally, but might not\n>> have been committed on the foreign server.\n>> ERROR: server closed the connection unexpectedly\n>> This probably means the server terminated abnormally\n>> before or while processing the request.\n>> CONTEXT: remote SQL command: PREPARE TRANSACTION\n>> 'fx_1020791818_519_16399_10'\n>> ```\n>> \n>> B. When prepare is succeeded,\n>> the output show that committed locally.\n>> \n>> ```\n>> postgres=*# COMMIT;\n>> ^CCancel request sent\n>> WARNING: canceling wait for resolving foreign transaction due to user\n>> request\n>> DETAIL: The transaction has already committed locally, but might not\n>> have been committed on the foreign server.\n>> COMMIT\n>> ```\n>> \n>> In case of A, I think that \"committed locally\" message can confuse \n>> user.\n>> Because although messages show committed but the transaction is\n>> \"ABORTED\".\n>> \n>> I think \"committed\" message means that \"ABORT\" is committed locally.\n>> But is there a possibility of misunderstanding?\n> \n> No, you're right. I'll fix it in the next version patch.\n> \n> I think synchronous replication also has the same problem. 
It says\n> \"the transaction has already committed\" but it's not true when\n> executing ROLLBACK PREPARED.\n\nThanks for replying and sharing the synchronous replication problem.\n\n> BTW how did you test the case (A)? It says canceling wait for foreign\n> transaction resolution but the remote SQL command is PREPARE\n> TRANSACTION.\n\nI think the timing of failures is important for 2PC testing.\nSince I don't have a good way to simulate those flexibly,\nI use the GDB debugger.\n\nThe message of case (A) is produced\nafter performing the following operations.\n\n1. Attach the debugger to a backend process.\n2. Set a breakpoint at PreCommit_FdwXact() in CommitTransaction().\n // Before PREPARE.\n3. Execute \"BEGIN\" and insert data into two remote foreign tables.\n4. Issue a \"COMMIT\" command.\n5. The backend process stops at the breakpoint.\n6. Stop a remote foreign server.\n7. Detach the debugger.\n // The backend continues and the prepare fails. The resolver tries to abort all \nremote txs.\n // It's unnecessary to resolve remote txs whose prepare failed, \nisn't it?\n8. Send a cancel request.\n\n\nBTW, I'm concerned about how to test the 2PC patches.\nThere are many failure patterns, such as failure timing,\nserver/network failures (and unexpected recovery), and those combinations...\n\nThough it's best to test those failure patterns automatically,\nI have no good idea for now, so I check some patterns manually.\n\n\n> I've incorporated the above your comments in the local branch. I'll\n> post the latest version patch after incorporating other comments soon.\n\nOK, Thanks.\n\n\nRegards,\n\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 15 Jul 2020 20:58:13 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
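The GDB procedure above injects a failure at one hand-picked point. The automation wished for here usually takes the form of a fault-injection loop: run a toy 2PC driver with a failure injected at every step and assert the atomicity invariant after each run. An illustrative sketch (not the patch's code):

```python
# Fault-injection harness for a toy 2PC driver: fail_at names the step
# ("prepare" or "commit") and the participant at which the failure is
# injected; the invariant check runs after every outcome.

def two_phase(parts, fail_at=None):
    """parts: dict name -> state. fail_at: ('prepare'|'commit', name)."""
    prepared = []
    for name in parts:                       # phase 1
        if fail_at == ("prepare", name):
            for p in prepared:
                parts[p] = "aborted"
            parts[name] = "aborted"
            return "aborted"
        parts[name] = "prepared"
        prepared.append(name)
    for name in parts:                       # phase 2
        # a crash here leaves an in-doubt tx that a resolver must finish
        if fail_at == ("commit", name):
            return "in-doubt"
        parts[name] = "committed"
    return "committed"

def invariant_holds(parts, outcome):
    states = set(parts.values())
    if outcome == "committed":
        return states == {"committed"}
    if outcome == "aborted":
        return "committed" not in states
    return True   # in-doubt: the resolver's job, nothing to assert yet
```

Enumerating every fail_at point turns the one-off GDB session into a repeatable test.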
{
"msg_contents": "On Tue, 14 Jul 2020 at 17:24, Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>\n> > I've attached the latest version patches. I've incorporated the review\n> > comments I got so far and improved locking strategy.\n>\n> I want to ask a question about streaming replication with 2PC.\n> Are you going to support 2PC with streaming replication?\n>\n> I tried streaming replication using v23 patches.\n> I confirm that 2PC works with streaming replication,\n> which there are primary/standby coordinator.\n>\n> But, in my understanding, the WAL of \"PREPARE\" and\n> \"COMMIT/ABORT PREPARED\" can't be replicated to the standby server in\n> sync.\n>\n> If this is right, the unresolved transaction can be occurred.\n>\n> For example,\n>\n> 1. PREPARE is done\n> 2. crash primary before the WAL related to PREPARE is\n> replicated to the standby server\n> 3. promote standby server // but can't execute \"ABORT PREPARED\"\n>\n> In above case, the remote server has the unresolved transaction.\n> Can we solve this problem to support in-sync replication?\n>\n> But, I think some users use async replication for performance.\n> Do we need to document the limitation or make another solution?\n>\n\nIIUC with synchronous replication, we can guarantee that WAL records\nare written on both primary and replicas when the client got an\nacknowledgment of commit. We don't replicate each WAL records\ngenerated during transaction one by one in sync. In the case you\ndescribed, the client will get an error due to the server crash.\nTherefore I think the user cannot expect WAL records generated so far\nhas been replicated. The same issue could happen also when the user\nexecutes PREPARE TRANSACTION and the server crashes. 
To prevent this\nissue, I think we would need to send each WAL record in sync but I'm\nnot sure it's reasonable behavior, and as long as we write WAL locally\nand then send it to replicas we would need a smart mechanism to\nprevent this situation.\n\nRelated to Ikeda-san's point, I realized that with the\ncurrent patch the backend waits for synchronous replication and then\nwaits for foreign transaction resolution. But it should be reversed.\nOtherwise, it could lead to data loss even when the client got an\nacknowledgment of commit. Also, when the user is using both atomic\ncommit and synchronous replication and wants to cancel waiting, he/she\nwill need to press ctrl-c twice with the current patch, which also\nshould be fixed.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 16 Jul 2020 13:16:50 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "Hi Sawada san,\r\n\r\n\r\nI'm reviewing this patch series, and let me give some initial comments and questions. I'm looking at this with a hope that this will be useful purely as a FDW enhancement for our new use cases, regardless of whether the FDW will be used for Postgres scale-out.\r\n\r\nI don't think it's necessarily required to combine 2PC with the global visibility. X/Open XA specification only handles the atomic commit. The only part in the XA specification that refers to global visibility is the following:\r\n\r\n\r\n[Quote from XA specification]\r\n--------------------------------------------------\r\n2.3.2 Protocol Optimisations \r\n・ Read-only \r\nAn RM can respond to the TM’s prepare request by asserting that the RM was not \r\nasked to update shared resources in this transaction branch. This response \r\nconcludes the RM’s involvement in the transaction; the Phase 2 dialogue between \r\nthe TM and this RM does not occur. The TM need not stably record, in its list of \r\nparticipating RMs, an RM that asserts a read-only role in the global transaction. \r\n\r\nHowever, if the RM returns the read-only optimisation before all work on the global \r\ntransaction is prepared, global serialisability1 cannot be guaranteed. This is because \r\nthe RM may release transaction context, such as read locks, before all application \r\nactivity for that global transaction is finished. \r\n\r\n1. \r\nSerialisability is a property of a set of concurrent transactions. For a serialisable set of transactions, at least one \r\nserial sequence of the transactions exists that produces identical results, with respect to shared resources, as does \r\nconcurrent execution of the transaction. \r\n--------------------------------------------------\r\n\r\n\r\n(1)\r\nDo other popular DBMSs (Oracle, MySQL, etc.) provide concrete functions that can be used for the new FDW commit/rollback/prepare API? 
I'm asking this to confirm that we really need to provide these functions, not as the transaction callbacks for postgres_fdw.\r\n\r\n\r\n(2)\r\nHow are data modifications tracked in local and remote transactions? 0001 seems to handle local INSERT/DELETE/UPDATE. Especially:\r\n\r\n* COPY FROM to local/remote tables/views.\r\n\r\n* User-defined function calls that modify data, e.g. SELECT func1() WHERE col = func2()\r\n\r\n\r\n(3)\r\nDoes the 2PC processing always go through the background worker?\r\nIs the group commit effective on the remote server? That is, PREPARE and COMMIT PREPARED issued from multiple remote sessions are written to WAL in batch?\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Thu, 16 Jul 2020 04:53:38 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
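The read-only optimisation quoted from the XA specification can be sketched concretely: a participant that did no writes answers read-only at prepare time, concludes its involvement immediately, and is skipped in phase 2. Hypothetical names throughout:

```python
# XA read-only optimisation sketch: only participants that wrote need a
# durable PREPARE and a phase-2 message; read-only participants drop out
# at phase 1 and need not be stably recorded by the TM.

def prepare_phase(participants):
    """participants: dict name -> did_write (bool).
    Returns the names that actually need phase 2."""
    need_phase2 = []
    for name, did_write in participants.items():
        if did_write:
            # a real RM would PREPARE TRANSACTION here and be logged
            need_phase2.append(name)
        # read-only RMs release locks now; per the spec, answering
        # read-only before all branches finish their work is what can
        # cost global serialisability
    return need_phase2
```

As the specification warns, the saving comes at a price: a read-only branch that answers early releases its read locks before the global transaction is done.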
{
"msg_contents": "On Tue, 14 Jul 2020 at 11:19, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/07/14 9:08, Masahiro Ikeda wrote:\n> >> I've attached the latest version patches. I've incorporated the review\n> >> comments I got so far and improved locking strategy.\n> >\n> > Thanks for updating the patch!\n>\n> +1\n> I'm interested in these patches and now studying them. While checking\n> the behaviors of the patched PostgreSQL, I got three comments.\n\nThank you for testing this patch!\n\n>\n> 1. We can access to the foreign table even during recovery in the HEAD.\n> But in the patched version, when I did that, I got the following error.\n> Is this intentional?\n>\n> ERROR: cannot assign TransactionIds during recovery\n\nNo, it should be fixed. I'm going to fix this by not collecting\nparticipants for atomic commit during recovery.\n\n>\n> 2. With the patch, when INSERT/UPDATE/DELETE are executed both in\n> local and remote servers, 2PC is executed at the commit phase. But\n> when write SQL (e.g., TRUNCATE) except INSERT/UPDATE/DELETE are\n> executed in local and INSERT/UPDATE/DELETE are executed in remote,\n> 2PC is NOT executed. Is this safe?\n\nHmm, you're right. I think atomic commit must be used also when the\nuser executes other write SQLs such as TRUNCATE, COPY, CLUSTER, and\nCREATE TABLE on the local node.\n\n>\n> 3. XACT_FLAGS_WROTENONTEMPREL is set when INSERT/UPDATE/DELETE\n> are executed. But it's not reset even when those queries are canceled by\n> ROLLBACK TO SAVEPOINT. This may cause unnecessary 2PC at the commit phase.\n\nWill fix.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 16 Jul 2020 14:47:10 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On 2020-07-16 13:16, Masahiko Sawada wrote:\n> On Tue, 14 Jul 2020 at 17:24, Masahiro Ikeda <ikedamsh@oss.nttdata.com> \n> wrote:\n>> \n>> > I've attached the latest version patches. I've incorporated the review\n>> > comments I got so far and improved locking strategy.\n>> \n>> I want to ask a question about streaming replication with 2PC.\n>> Are you going to support 2PC with streaming replication?\n>> \n>> I tried streaming replication using v23 patches.\n>> I confirm that 2PC works with streaming replication,\n>> which there are primary/standby coordinator.\n>> \n>> But, in my understanding, the WAL of \"PREPARE\" and\n>> \"COMMIT/ABORT PREPARED\" can't be replicated to the standby server in\n>> sync.\n>> \n>> If this is right, the unresolved transaction can be occurred.\n>> \n>> For example,\n>> \n>> 1. PREPARE is done\n>> 2. crash primary before the WAL related to PREPARE is\n>> replicated to the standby server\n>> 3. promote standby server // but can't execute \"ABORT PREPARED\"\n>> \n>> In above case, the remote server has the unresolved transaction.\n>> Can we solve this problem to support in-sync replication?\n>> \n>> But, I think some users use async replication for performance.\n>> Do we need to document the limitation or make another solution?\n>> \n> \n> IIUC with synchronous replication, we can guarantee that WAL records\n> are written on both primary and replicas when the client got an\n> acknowledgment of commit. We don't replicate each WAL records\n> generated during transaction one by one in sync. In the case you\n> described, the client will get an error due to the server crash.\n> Therefore I think the user cannot expect WAL records generated so far\n> has been replicated. The same issue could happen also when the user\n> executes PREPARE TRANSACTION and the server crashes.\n\nThanks! 
I didn't notice that the behavior when a user executes PREPARE \nTRANSACTION is the same.\n\nIIUC with 2PC, there is a difference between (1) PREPARE TRANSACTION \nand (2) 2PC:\nwhether the client can know when the server crashed \nand its global tx id.\n\nIf (1) PREPARE TRANSACTION fails, it's OK for the client to execute the same \ncommand again,\nbecause if the remote server has already prepared, the command will be \nignored.\n\nBut if (2) 2PC fails with a coordinator crash, the client can't know \nwhat operations should be done.\n\nIf the old coordinator already executed PREPARE, there are some \ntransactions which should be ABORT PREPARED.\nBut if the PREPARE WAL was not sent to the standby, the new coordinator \ncan't execute ABORT PREPARED.\nAnd the client can't know which remote servers have PREPARED \ntransactions which should be ABORTED either.\n\nEven if the client could know that, only the old coordinator knows its \nglobal transaction id.\nOnly the database administrator can analyze the old coordinator's log\nand then execute the appropriate commands manually, right?\n\n\n> To prevent this\n> issue, I think we would need to send each WAL records in sync but I'm\n> not sure it's reasonable behavior, and as long as we write WAL in the\n> local and then send it to replicas we would need a smart mechanism to\n> prevent this situation.\n\nI agree. Sending each 2PC WAL record in sync would have a large \nperformance impact.\nAt least, we need to document the limitation and how to handle this \nsituation.\n\n\n> Related to the pointing out by Ikeda-san, I realized that with the\n> current patch the backend waits for synchronous replication and then\n> waits for foreign transaction resolution. But it should be reversed.\n> Otherwise, it could lead to data loss even when the client got an\n> acknowledgment of commit. 
Also, when the user is using both atomic\n> commit and synchronous replication and wants to cancel waiting, he/she\n> will need to press ctl-c twice with the current patch, which also\n> should be fixed.\n\nI'm sorry, but I couldn't understand this.\n\nIn my understanding, if the COMMIT WAL is replicated to the standby in sync,\nthe standby server can resolve the transaction after crash recovery in \nthe promoted phase.\n\nIf reversed, there are some situations which can't guarantee atomic \ncommit.\nIn case some foreign transaction resolutions succeed but others \nfail (and the COMMIT WAL is not replicated),\nthe standby must ABORT PREPARED because the COMMIT WAL is not \nreplicated.\nThis means that some foreign transactions are COMMIT PREPARED, executed \nby the primary coordinator, while\nother foreign transactions can be ABORT PREPARED, executed by the secondary \ncoordinator.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 17 Jul 2020 11:06:29 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Thu, 16 Jul 2020 at 13:53, tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> Hi Sawada san,\n>\n>\n> I'm reviewing this patch series, and let me give some initial comments and questions. I'm looking at this with a hope that this will be useful purely as a FDW enhancement for our new use cases, regardless of whether the FDW will be used for Postgres scale-out.\n\nThank you for reviewing this patch!\n\nYes, this patch is trying to resolve the generic atomic commit problem\nw.r.t. FDW, and will be useful also for Postgres scale-out.\n\n>\n> I don't think it's necessarily required to combine 2PC with the global visibility. X/Open XA specification only handles the atomic commit. The only part in the XA specification that refers to global visibility is the following:\n>\n>\n> [Quote from XA specification]\n> --------------------------------------------------\n> 2.3.2 Protocol Optimisations\n> ・ Read-only\n> An RM can respond to the TM’s prepare request by asserting that the RM was not\n> asked to update shared resources in this transaction branch. This response\n> concludes the RM’s involvement in the transaction; the Phase 2 dialogue between\n> the TM and this RM does not occur. The TM need not stably record, in its list of\n> participating RMs, an RM that asserts a read-only role in the global transaction.\n>\n> However, if the RM returns the read-only optimisation before all work on the global\n> transaction is prepared, global serialisability1 cannot be guaranteed. This is because\n> the RM may release transaction context, such as read locks, before all application\n> activity for that global transaction is finished.\n>\n> 1.\n> Serialisability is a property of a set of concurrent transactions. 
For a serialisable set of transactions, at least one\n> serial sequence of the transactions exists that produces identical results, with respect to shared resources, as does\n> concurrent execution of the transaction.\n> --------------------------------------------------\n>\n\nAgreed.\n\n>\n> (1)\n> Do other popular DBMSs (Oracle, MySQL, etc.) provide concrete functions that can be used for the new FDW commit/rollback/prepare API? I'm asking this to confirm that we really need to provide these functions, not as the transaction callbacks for postgres_fdw.\n>\n\nI have briefly checked the only oracle_fdw but in general I think that\nif an existing FDW supports transaction begin, commit, and rollback,\nthese can be ported to new FDW transaction APIs easily.\n\nRegarding the comparison between FDW transaction APIs and transaction\ncallbacks, I think one of the benefits of providing FDW transaction\nAPIs is that the core is able to manage the status of foreign\ntransactions. We need to track the status of individual foreign\ntransactions to support atomic commit. If we use transaction callbacks\n(XactCallback) that many FDWs are using, I think we will end up\ncalling the transaction callback and leave the transaction work to\nFDWs, leading that the core is not able to know the return values of\nPREPARE TRANSACTION for example. We can add more arguments passed to\ntransaction callbacks to get the return value from FDWs but I don’t\nthink it’s a good idea as transaction callbacks are used not only by\nFDW but also other external modules.\n\n>\n> (2)\n> How are data modifications tracked in local and remote transactions? 0001 seems to handle local INSERT/DELETE/UPDATE. Especially:\n>\n> * COPY FROM to local/remote tables/views.\n>\n> * User-defined function calls that modify data, e.g. SELECT func1() WHERE col = func2()\n>\n\nWith the current version patch (v23), it supports only\nINSERT/DELETE/UPDATE. 
But I'm going to change the patch so that it\nsupports other write SQLs, as Fujii-san also pointed out.\n\n>\n> (3)\n> Does the 2PC processing always go through the background worker?\n> Is the group commit effective on the remote server? That is, PREPARE and COMMIT PREPARED issued from multiple remote sessions are written to WAL in batch?\n\nNo, in the current design, the backend that received a query from the\nclient does PREPARE, and then the transaction resolver process, a\nbackground worker, does COMMIT PREPARED.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 17 Jul 2020 12:08:09 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\r\nI have briefly checked the only oracle_fdw but in general I think that\r\n> if an existing FDW supports transaction begin, commit, and rollback,\r\n> these can be ported to new FDW transaction APIs easily.\r\n\r\nDoes oracle_fdw support begin, commit and rollback?\r\n\r\nAnd most importantly, do other major DBMSs, including Oracle, provide the API for preparing a transaction? In other words, will the FDWs other than postgres_fdw really be able to take advantage of the new FDW functions to join the 2PC processing? I think we need to confirm that there are concrete examples.\r\n\r\nWhat I'm worried is that if only postgres_fdw can implement the prepare function, it's a sign that FDW interface will be riddled with functions only for Postgres. That is, the FDW interface is getting away from its original purpose \"access external data as a relation\" and complex. Tomas Vondra showed this concern as follows:\r\n\r\nHorizontal scalability/sharding \r\nhttps://www.postgresql.org/message-id/flat/CANP8%2BjK%3D%2B3zVYDFY0oMAQKQVJ%2BqReDHr1UPdyFEELO82yVfb9A%40mail.gmail.com#2c45f0ee97855449f1f7fedcef1d5e11\r\n\r\n\r\n[Tomas Vondra's remarks]\r\n--------------------------------------------------\r\n> This strikes me as a bit of a conflict of interest with FDW which\r\n> seems to want to hide the fact that it's foreign; the FDW\r\n> implementation makes it's own optimization decisions which might\r\n> make sense for single table queries but breaks down in the face of\r\n> joins.\r\n\r\n+1 to these concerns\r\n\r\nIn my mind, FDW is a wonderful tool to integrate PostgreSQL with \r\nexternal data sources, and it's nicely shaped for this purpose, which \r\nimplies the abstractions and assumptions in the code.\r\n\r\nThe truth however is that many current uses of the FDW API are actually \r\nusing it for different purposes because there's no other way to do that, \r\nnot because FDWs are the \"right way\". 
And this includes the attempts to \r\nbuild sharding on FDW, I think.\r\n\r\nSituations like this result in \"improvements\" of the API that seem to \r\nimprove the API for the second group, but make the life harder for the \r\noriginal FDW API audience by making the API needlessly complex. And I \r\nsay \"seem to improve\" because the second group eventually runs into the \r\nfundamental abstractions and assumptions the API is based on anyway.\r\n\r\nAnd based on the discussions at pgcon, I think this is the main reason \r\nwhy people cringe when they hear \"FDW\" and \"sharding\" in the same sentence.\r\n\r\n...\r\nMy other worry is that we'll eventually mess the FDW infrastructure, \r\nmaking it harder to use for the original purpose. Granted, most of the \r\nimprovements proposed so far look sane and useful for FDWs in general, \r\nbut sooner or later that ceases to be the case - there sill be changes \r\nneeded merely for the sharding. Those will be tough decisions.\r\n--------------------------------------------------\r\n\r\n\r\n> Regarding the comparison between FDW transaction APIs and transaction\r\n> callbacks, I think one of the benefits of providing FDW transaction\r\n> APIs is that the core is able to manage the status of foreign\r\n> transactions. We need to track the status of individual foreign\r\n> transactions to support atomic commit. If we use transaction callbacks\r\n> (XactCallback) that many FDWs are using, I think we will end up\r\n> calling the transaction callback and leave the transaction work to\r\n> FDWs, leading that the core is not able to know the return values of\r\n> PREPARE TRANSACTION for example. 
We can add more arguments passed to\r\n> transaction callbacks to get the return value from FDWs but I don’t\r\n> think it’s a good idea as transaction callbacks are used not only by\r\n> FDW but also other external modules.\r\n\r\nTo track the foreign transaction status, we can add GetTransactionStatus() to the FDW interface as an alternative, can't we?\r\n\r\n\r\n> With the current version patch (v23), it supports only\r\n> INSERT/DELETE/UPDATE. But I'm going to change the patch so that it\r\n> supports other writes SQLs as Fujii-san also pointed out.\r\n\r\nOK. I've just read that Fujii san already pointed out a similar thing. But I wonder if we can know that the UDF executed on the foreign server has updated data. Maybe we can know or guess it by calling txid_current_if_any() or checking the transaction status in FE/BE protocol, but can we deal with other FDWs other than postgres_fdw?\r\n\r\n\r\n> No, in the current design, the backend who received a query from the\r\n> client does PREPARE, and then the transaction resolver process, a\r\n> background worker, does COMMIT PREPARED.\r\n\r\nThis \"No\" means the current implementation cannot group commits from multiple transactions?\r\nDoes the transaction resolver send COMMIT PREPARED and waits for its response for each transaction one by one? For example,\r\n\r\n[local server]\r\nTransaction T1 and T2 performs 2PC at the same time.\r\nTransaction resolver sends COMMIT PREPARED for T1 and then waits for the response.\r\nT1 writes COMMIT PREPARED record locally and sync the WAL.\r\nTransaction resolver sends COMMIT PREPARED for T2 and then waits for the response.\r\nT2 writes COMMIT PREPARED record locally and sync the WAL.\r\n\r\n[foreign server]\r\nT1 writes COMMIT PREPARED record locally and sync the WAL.\r\nT2 writes COMMIT PREPARED record locally and sync the WAL.\r\n\r\nIf the WAL records of multiple concurrent transactions are written and synced separately, i.e. 
group commit doesn't take effect, then the OLTP transaction performance will be unacceptable.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n",
"msg_date": "Fri, 17 Jul 2020 05:21:05 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
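Tsunakawa-san's group-commit concern above can be made concrete with a small model: a resolver that sends COMMIT PREPARED and syncs WAL once per transaction pays N syncs for N transactions, while batching pays one sync per batch. This is a sketch of the cost argument only, not the patch's resolver code.

```python
# Model (not the patch's code) of the group-commit concern: count how
# many WAL syncs the transaction resolver performs for N resolved
# transactions, sequentially vs. in batches.

def resolve_sequentially(xids):
    syncs = 0
    for _ in xids:
        # send COMMIT PREPARED, write the removal record, fsync, wait
        syncs += 1
    return syncs

def resolve_in_batches(xids, batch_size):
    syncs = 0
    for i in range(0, len(xids), batch_size):
        # send COMMIT PREPARED for every xid in xids[i:i+batch_size],
        # then one fsync covering the whole batch
        syncs += 1
    return syncs
```

For 100 concurrent transactions, the sequential resolver does 100 syncs where a batch size of 25 would do 4 — the gap Tsunakawa-san argues makes OLTP latency unacceptable without grouping.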
{
"msg_contents": "On Fri, 2020-07-17 at 05:21 +0000, tsunakawa.takay@fujitsu.com wrote:\n> From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> I have briefly checked the only oracle_fdw but in general I think that\n> > if an existing FDW supports transaction begin, commit, and rollback,\n> > these can be ported to new FDW transaction APIs easily.\n> \n> Does oracle_fdw support begin, commit and rollback?\n\nYes.\n\n> And most importantly, do other major DBMSs, including Oracle, provide the API for\n> preparing a transaction? In other words, will the FDWs other than postgres_fdw\n> really be able to take advantage of the new FDW functions to join the 2PC processing?\n> I think we need to confirm that there are concrete examples.\n\nI bet they do. There is even a standard for that.\n\nI am not looking forward to adapting oracle_fdw, and I didn't read the patch.\n\nBut using distributed transactions is certainly a good thing if it is done right.\n\nThe trade off is the need for a transaction manager, and implementing that\ncorrectly is a high price to pay.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Fri, 17 Jul 2020 08:49:08 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, 17 Jul 2020 at 11:06, Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>\n> On 2020-07-16 13:16, Masahiko Sawada wrote:\n> > On Tue, 14 Jul 2020 at 17:24, Masahiro Ikeda <ikedamsh@oss.nttdata.com>\n> > wrote:\n> >>\n> >> > I've attached the latest version patches. I've incorporated the review\n> >> > comments I got so far and improved locking strategy.\n> >>\n> >> I want to ask a question about streaming replication with 2PC.\n> >> Are you going to support 2PC with streaming replication?\n> >>\n> >> I tried streaming replication using v23 patches.\n> >> I confirm that 2PC works with streaming replication,\n> >> which there are primary/standby coordinator.\n> >>\n> >> But, in my understanding, the WAL of \"PREPARE\" and\n> >> \"COMMIT/ABORT PREPARED\" can't be replicated to the standby server in\n> >> sync.\n> >>\n> >> If this is right, the unresolved transaction can be occurred.\n> >>\n> >> For example,\n> >>\n> >> 1. PREPARE is done\n> >> 2. crash primary before the WAL related to PREPARE is\n> >> replicated to the standby server\n> >> 3. promote standby server // but can't execute \"ABORT PREPARED\"\n> >>\n> >> In above case, the remote server has the unresolved transaction.\n> >> Can we solve this problem to support in-sync replication?\n> >>\n> >> But, I think some users use async replication for performance.\n> >> Do we need to document the limitation or make another solution?\n> >>\n> >\n> > IIUC with synchronous replication, we can guarantee that WAL records\n> > are written on both primary and replicas when the client got an\n> > acknowledgment of commit. We don't replicate each WAL records\n> > generated during transaction one by one in sync. In the case you\n> > described, the client will get an error due to the server crash.\n> > Therefore I think the user cannot expect WAL records generated so far\n> > has been replicated. 
The same issue could happen also when the user\n> > executes PREPARE TRANSACTION and the server crashes.\n>\n> Thanks! I didn't noticed the behavior when a user executes PREPARE\n> TRANSACTION is same.\n>\n> IIUC with 2PC, there is a different point between (1)PREPARE TRANSACTION\n> and (2)2PC.\n> The point is that whether the client can know when the server crashed\n> and it's global tx id.\n>\n> If (1)PREPARE TRANSACTION is failed, it's ok the client execute same\n> command\n> because if the remote server is already prepared the command will be\n> ignored.\n>\n> But, if (2)2PC is failed with coordinator crash, the client can't know\n> what operations should be done.\n>\n> If the old coordinator already executed PREPARED, there are some\n> transaction which should be ABORT PREPARED.\n> But if the PREPARED WAL is not sent to the standby, the new coordinator\n> can't execute ABORT PREPARED.\n> And the client can't know which remote servers have PREPARED\n> transactions which should be ABORTED either.\n>\n> Even if the client can know that, only the old coordinator knows its\n> global transaction id.\n> Only the database administrator can analyze the old coordinator's log\n> and then execute the appropriate commands manually, right?\n\nI think that's right. In the case of the coordinator crash, the user\ncan look orphaned foreign prepared transactions by checking the\n'identifier' column of pg_foreign_xacts on the new standby server and\nthe prepared transactions on the remote servers.\n\n>\n>\n> > To prevent this\n> > issue, I think we would need to send each WAL records in sync but I'm\n> > not sure it's reasonable behavior, and as long as we write WAL in the\n> > local and then send it to replicas we would need a smart mechanism to\n> > prevent this situation.\n>\n> I agree. To send each 2PC WAL records in sync must be with a large\n> performance impact.\n> At least, we need to document the limitation and how to handle this\n> situation.\n\nOk. 
I'll add it.\n\n>\n>\n> > Related to the pointing out by Ikeda-san, I realized that with the\n> > current patch the backend waits for synchronous replication and then\n> > waits for foreign transaction resolution. But it should be reversed.\n> > Otherwise, it could lead to data loss even when the client got an\n> > acknowledgment of commit. Also, when the user is using both atomic\n> > commit and synchronous replication and wants to cancel waiting, he/she\n> > will need to press ctl-c twice with the current patch, which also\n> > should be fixed.\n>\n> I'm sorry that I can't understood.\n>\n> In my understanding, if COMMIT WAL is replicated to the standby in sync,\n> the standby server can resolve the transaction after crash recovery in\n> promoted phase.\n>\n> If reversed, there are some situation which can't guarantee atomic\n> commit.\n> In case that some foreign transaction resolutions are succeed but others\n> are failed(and COMMIT WAL is not replicated),\n> the standby must ABORT PREPARED because the COMMIT WAL is not\n> replicated.\n> This means that some foreign transactions are COMMITE PREPARED executed\n> by primary coordinator,\n> other foreign transactions can be ABORT PREPARED executed by secondary\n> coordinator.\n\nYou're right. Thank you for pointing out!\n\nIf the coordinator crashes after the client gets acknowledgment of the\nsuccessful commit of the transaction but before sending\nXLOG_FDWXACT_REMOVE record to the replicas, the FdwXact entries are\nleft on the replicas even after failover. But since we require FDW to\ntolerate the error of undefined prepared transactions in\nCOMMIT/ROLLBACK PREPARED it won’t be a critical problem.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 17 Jul 2020 15:55:32 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
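The failover cleanup Sawada-san describes — check the prepared foreign transactions visible after promotion, commit those whose decision survived, and roll back the rest, tolerating "no such prepared transaction" errors since the FDW is required to handle COMMIT/ROLLBACK PREPARED on an undefined transaction — can be sketched as below. All names here are hypothetical; the real patch works through pg_foreign_xacts and WAL records.

```python
# Sketch of post-failover resolution (hypothetical names). Any foreign
# prepared transaction without a surviving commit decision on the new
# primary is rolled back; an unknown-transaction error from the remote
# is treated as already resolved, as the thread requires of FDWs.

def resolve_after_failover(foreign_prepared, committed_xids, abort_prepared):
    """foreign_prepared: xids found prepared on remote servers.
    committed_xids: xids whose commit decision survived on the new primary.
    abort_prepared: callback issuing ROLLBACK PREPARED on the remote."""
    actions = {}
    for xid in foreign_prepared:
        if xid in committed_xids:
            actions[xid] = "commit"
        else:
            try:
                abort_prepared(xid)
                actions[xid] = "aborted"
            except KeyError:  # unknown on the remote: already resolved
                actions[xid] = "already-resolved"
    return actions
```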
{
"msg_contents": "From: Laurenz Albe <laurenz.albe@cybertec.at>\r\n> On Fri, 2020-07-17 at 05:21 +0000, tsunakawa.takay@fujitsu.com wrote:\r\n> > And most importantly, do other major DBMSs, including Oracle, provide the\r\n> API for\r\n> > preparing a transaction? In other words, will the FDWs other than\r\n> postgres_fdw\r\n> > really be able to take advantage of the new FDW functions to join the 2PC\r\n> processing?\r\n> > I think we need to confirm that there are concrete examples.\r\n> \r\n> I bet they do. There is even a standard for that.\r\n\r\nIf you're thinking of xa_prepare() defined in the X/Open XA specification, we need to be sure that other FDWs can really utilize this new 2PC mechanism. What I'm especially wondering is when the FDW can call xa_start().\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n",
"msg_date": "Fri, 17 Jul 2020 08:29:08 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, 17 Jul 2020 at 14:22, tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> I have briefly checked the only oracle_fdw but in general I think that\n> > if an existing FDW supports transaction begin, commit, and rollback,\n> > these can be ported to new FDW transaction APIs easily.\n>\n> Does oracle_fdw support begin, commit and rollback?\n>\n> And most importantly, do other major DBMSs, including Oracle, provide the API for preparing a transaction? In other words, will the FDWs other than postgres_fdw really be able to take advantage of the new FDW functions to join the 2PC processing? I think we need to confirm that there are concrete examples.\n\nI also believe they do. But I'm concerned that some FDW needs to start\na transaction differently when using 2PC. For instance, IIUC MySQL\nalso supports 2PC but the transaction needs to be started with \"XA\nSTART id” when the transaction needs to be prepared. The transaction\nstarted with XA START can be closed by XA END followed by XA PREPARE\nor XA COMMIT ONE PHASE. It means that when starts a new transaction\nthe transaction needs to prepare the transaction identifier and to\nknow that 2PC might be used. It’s quite different from PostgreSQL. In\nPostgreSQL, we can start a transaction by BEGIN and end it by PREPARE\nTRANSACTION, COMMIT, or ROLLBACK. The transaction identifier is\nrequired when PREPARE TRANSACTION.\n\nWith MySQL, I guess FDW needs a way to tell the (next) transaction\nneeds to be started with XA START so it can be prepared. It could be a\ncustom GUC or an SQL function. Then when starts a new transaction on\nMySQL server, FDW can generate and store a transaction identifier into\nsomewhere alongside the connection. At the prepare phase, it passes\nthe transaction identifier via GetPrepareId() API to the core.\n\nI haven’t tested the above yet and it’s just a desk plan. 
it's\ndefinitely a good idea to try integrating this 2PC feature to FDWs\nother than postgres_fdw to see if design and interfaces are\nimplemented sophisticatedly.\n\n>\n> What I'm worried is that if only postgres_fdw can implement the prepare function, it's a sign that FDW interface will be riddled with functions only for Postgres. That is, the FDW interface is getting away from its original purpose \"access external data as a relation\" and complex. Tomas Vondra showed this concern as follows:\n>\n> Horizontal scalability/sharding\n> https://www.postgresql.org/message-id/flat/CANP8%2BjK%3D%2B3zVYDFY0oMAQKQVJ%2BqReDHr1UPdyFEELO82yVfb9A%40mail.gmail.com#2c45f0ee97855449f1f7fedcef1d5e11\n>\n>\n> [Tomas Vondra's remarks]\n> --------------------------------------------------\n> > This strikes me as a bit of a conflict of interest with FDW which\n> > seems to want to hide the fact that it's foreign; the FDW\n> > implementation makes it's own optimization decisions which might\n> > make sense for single table queries but breaks down in the face of\n> > joins.\n>\n> +1 to these concerns\n>\n> In my mind, FDW is a wonderful tool to integrate PostgreSQL with\n> external data sources, and it's nicely shaped for this purpose, which\n> implies the abstractions and assumptions in the code.\n>\n> The truth however is that many current uses of the FDW API are actually\n> using it for different purposes because there's no other way to do that,\n> not because FDWs are the \"right way\". And this includes the attempts to\n> build sharding on FDW, I think.\n>\n> Situations like this result in \"improvements\" of the API that seem to\n> improve the API for the second group, but make the life harder for the\n> original FDW API audience by making the API needlessly complex. 
And I\n> say \"seem to improve\" because the second group eventually runs into the\n> fundamental abstractions and assumptions the API is based on anyway.\n>\n> And based on the discussions at pgcon, I think this is the main reason\n> why people cringe when they hear \"FDW\" and \"sharding\" in the same sentence.\n>\n> ...\n> My other worry is that we'll eventually mess the FDW infrastructure,\n> making it harder to use for the original purpose. Granted, most of the\n> improvements proposed so far look sane and useful for FDWs in general,\n> but sooner or later that ceases to be the case - there sill be changes\n> needed merely for the sharding. Those will be tough decisions.\n> --------------------------------------------------\n>\n>\n> > Regarding the comparison between FDW transaction APIs and transaction\n> > callbacks, I think one of the benefits of providing FDW transaction\n> > APIs is that the core is able to manage the status of foreign\n> > transactions. We need to track the status of individual foreign\n> > transactions to support atomic commit. If we use transaction callbacks\n> > (XactCallback) that many FDWs are using, I think we will end up\n> > calling the transaction callback and leave the transaction work to\n> > FDWs, leading that the core is not able to know the return values of\n> > PREPARE TRANSACTION for example. We can add more arguments passed to\n> > transaction callbacks to get the return value from FDWs but I don’t\n> > think it’s a good idea as transaction callbacks are used not only by\n> > FDW but also other external modules.\n>\n> To track the foreign transaction status, we can add GetTransactionStatus() to the FDW interface as an alternative, can't we?\n\nI haven't thought such an interface but it sounds like the transaction\nstatus is managed on both the core and FDWs. Could you elaborate on\nthat?\n\n>\n>\n> > With the current version patch (v23), it supports only\n> > INSERT/DELETE/UPDATE. 
But I'm going to change the patch so that it\n> > supports other writes SQLs as Fujii-san also pointed out.\n>\n> OK. I've just read that Fujii san already pointed out a similar thing. But I wonder if we can know that the UDF executed on the foreign server has updated data. Maybe we can know or guess it by calling txid_current_if_any() or checking the transaction status in FE/BE protocol, but can we deal with other FDWs other than postgres_fdw?\n\nAh, my answer was not enough. It was only about tracking local writes.\n\nRegarding tracking of writes on the foreign server, I think there are\nrestrictions. Currently, the executor registers a foreign sever as a\nparticipant of 2PC before calling BeginForeignInsert(),\nBeginForeignModify(), and BeginForeignScan() etc with a flag\nindicating whether writes is going to happen on the foreign server. So\neven if an UDF in a SELECT statement that could update data were to be\npushed down to the foreign server, the foreign server would be marked\nas *not* modified. I’ve not tested yet but I guess that since FDW also\nis allowed to register the foreign server along with that flag anytime\nbefore commit, FDW is able to forcibly change that flag if it knows\nthe SELECT query is going to modify the data on the remote server.\n\n>\n>\n> > No, in the current design, the backend who received a query from the\n> > client does PREPARE, and then the transaction resolver process, a\n> > background worker, does COMMIT PREPARED.\n>\n> This \"No\" means the current implementation cannot group commits from multiple transactions?\n\nYes.\n\n> Does the transaction resolver send COMMIT PREPARED and waits for its response for each transaction one by one? 
For example,\n>\n> [local server]\n> Transaction T1 and T2 performs 2PC at the same time.\n> Transaction resolver sends COMMIT PREPARED for T1 and then waits for the response.\n> T1 writes COMMIT PREPARED record locally and sync the WAL.\n> Transaction resolver sends COMMIT PREPARED for T2 and then waits for the response.\n> T2 writes COMMIT PREPARED record locally and sync the WAL.\n>\n> [foreign server]\n> T1 writes COMMIT PREPARED record locally and sync the WAL.\n> T2 writes COMMIT PREPARED record locally and sync the WAL.\n\nJust to be clear, the transaction resolver writes FDWXACT_REMOVE\nrecords instead of COMMIT PREPARED record to remove foreign\ntransaction entry. But, yes, the transaction resolver works like the\nabove you explained.\n\n> If the WAL records of multiple concurrent transactions are written and synced separately, i.e. group commit doesn't take effect, then the OLTP transaction performance will be unacceptable.\n\nI agree that it'll be a large performance penalty. I'd like to have it\nbut I’m not sure we should have it in the first version from the\nperspective of complexity. Since the procedure of 2PC is originally\nhigh cost, in my opinion, the user should not use as much as possible\nin terms of performance. Especially in OLTP, its cost will directly\naffect the latency. I’d suggest designing database schema so\ntransaction touches only one foreign server but do you have concrete\nOLTP usecase where normally requires 2PC, and how many servers\ninvolved within a distributed transaction?\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 17 Jul 2020 20:04:44 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
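The group-commit concern discussed in the message above (the resolver handling each prepared foreign transaction with its own WAL sync) can be illustrated with a tiny model. This is a hypothetical sketch, not code from the patch; the function names and the batching scheme are invented for illustration only:

```python
# Hypothetical model of a transaction resolver, illustrating why resolving
# prepared foreign transactions one by one costs one WAL sync per transaction,
# while batching (group commit) amortizes the sync across a whole queue.

def resolve_one_by_one(queue):
    """Resolve each entry separately: one removal record plus one sync each."""
    syncs = 0
    for txn in queue:
        # write a FDWXACT_REMOVE record for txn, then fsync the WAL
        syncs += 1
    return syncs

def resolve_batched(queue, batch_size):
    """Write removal records for a whole batch, then sync once per batch."""
    syncs = 0
    for i in range(0, len(queue), batch_size):
        # write FDWXACT_REMOVE records for queue[i:i+batch_size], one fsync
        syncs += 1
    return syncs

queue = [f"txn{i}" for i in range(100)]
print(resolve_one_by_one(queue))   # 100 syncs
print(resolve_batched(queue, 16))  # 7 syncs
```

With per-transaction resolution the sync count grows linearly with the number of distributed transactions, while batching amortizes it, which is the essence of the group-commit request in this exchange.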
{
"msg_contents": "\n\nOn 2020/07/17 20:04, Masahiko Sawada wrote:\n> On Fri, 17 Jul 2020 at 14:22, tsunakawa.takay@fujitsu.com\n> <tsunakawa.takay@fujitsu.com> wrote:\n>>\n>> From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n>> I have briefly checked the only oracle_fdw but in general I think that\n>>> if an existing FDW supports transaction begin, commit, and rollback,\n>>> these can be ported to new FDW transaction APIs easily.\n>>\n>> Does oracle_fdw support begin, commit and rollback?\n>>\n>> And most importantly, do other major DBMSs, including Oracle, provide the API for preparing a transaction? In other words, will the FDWs other than postgres_fdw really be able to take advantage of the new FDW functions to join the 2PC processing? I think we need to confirm that there are concrete examples.\n> \n> I also believe they do. But I'm concerned that some FDW needs to start\n> a transaction differently when using 2PC. For instance, IIUC MySQL\n> also supports 2PC but the transaction needs to be started with \"XA\n> START id” when the transaction needs to be prepared. The transaction\n> started with XA START can be closed by XA END followed by XA PREPARE\n> or XA COMMIT ONE PHASE.\n\nThis means that FDW should provide also the API for xa_end()?\nMaybe we need to consider again which API we should provide in FDW,\nbased on XA specification?\n\n\n> It means that when starts a new transaction\n> the transaction needs to prepare the transaction identifier and to\n> know that 2PC might be used. It’s quite different from PostgreSQL. In\n> PostgreSQL, we can start a transaction by BEGIN and end it by PREPARE\n> TRANSACTION, COMMIT, or ROLLBACK. The transaction identifier is\n> required when PREPARE TRANSACTION.\n> \n> With MySQL, I guess FDW needs a way to tell the (next) transaction\n> needs to be started with XA START so it can be prepared. It could be a\n> custom GUC or an SQL function. 
Then when starts a new transaction on\n> MySQL server, FDW can generate and store a transaction identifier into\n> somewhere alongside the connection. At the prepare phase, it passes\n> the transaction identifier via GetPrepareId() API to the core.\n> \n> I haven’t tested the above yet and it’s just a desk plan. it's\n> definitely a good idea to try integrating this 2PC feature to FDWs\n> other than postgres_fdw to see if design and interfaces are\n> implemented sophisticatedly.\n\nWith the current patch, we track whether write queries are executed\nin each server. Then, if the number of servers that execute write queries\nis less than two, 2PC is skipped. This \"optimization\" is not necessary\n(cannot be applied) when using mysql_fdw because the transaction starts\nwith XA START. Right?\n\nIf that's the \"optimization\" only for postgres_fdw, maybe it's better to\nget rid of that \"optimization\" from the first patch, to make the patch simpler.\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Sat, 18 Jul 2020 01:45:03 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
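The "optimization" questioned above (skipping the prepare phase when fewer than two participating servers executed writes) can be sketched as a small decision routine. This is an illustrative model only; the function and server names are invented and do not come from the patch:

```python
# Illustrative sketch of the commit-time decision under discussion: if fewer
# than two participants modified data, the prepare phase is unnecessary and
# each server can simply be committed in one phase.

def commit_distributed(participants):
    """participants: dict of server name -> True if it executed writes.
    Returns the ordered (server, action) steps a coordinator would take."""
    writers = [s for s, modified in participants.items() if modified]
    steps = []
    if len(writers) < 2:
        # At most one server has changes: plain one-phase commit everywhere.
        for s in participants:
            steps.append((s, "COMMIT (one-phase)"))
    else:
        # Two or more writers: prepare all writers first, then commit them.
        for s in writers:
            steps.append((s, "PREPARE"))
        for s in writers:
            steps.append((s, "COMMIT PREPARED"))
        for s in participants:
            if s not in writers:  # read-only participants need no 2PC
                steps.append((s, "COMMIT (one-phase)"))
    return steps

print(commit_distributed({"fs1": True, "fs2": False}))
print(commit_distributed({"fs1": True, "fs2": True, "fs3": False}))
```

Dropping the optimization, as suggested for the first patch, would amount to always taking the prepare-then-commit branch for every writer.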
{
"msg_contents": "\n\nOn 2020/07/16 14:47, Masahiko Sawada wrote:\n> On Tue, 14 Jul 2020 at 11:19, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/07/14 9:08, Masahiro Ikeda wrote:\n>>>> I've attached the latest version patches. I've incorporated the review\n>>>> comments I got so far and improved locking strategy.\n>>>\n>>> Thanks for updating the patch!\n>>\n>> +1\n>> I'm interested in these patches and now studying them. While checking\n>> the behaviors of the patched PostgreSQL, I got three comments.\n> \n> Thank you for testing this patch!\n> \n>>\n>> 1. We can access to the foreign table even during recovery in the HEAD.\n>> But in the patched version, when I did that, I got the following error.\n>> Is this intentional?\n>>\n>> ERROR: cannot assign TransactionIds during recovery\n> \n> No, it should be fixed. I'm going to fix this by not collecting\n> participants for atomic commit during recovery.\n\nThanks for trying to fix the issues!\n\nI'd like to report one more issue. When I started new transaction\nin the local server, executed INSERT in the remote server via\npostgres_fdw and then quit psql, I got the following assertion failure.\n\nTRAP: FailedAssertion(\"fdwxact\", File: \"fdwxact.c\", Line: 1570)\n0 postgres 0x000000010d52f3c0 ExceptionalCondition + 160\n1 postgres 0x000000010cefbc49 ForgetAllFdwXactParticipants + 313\n2 postgres 0x000000010cefff14 AtProcExit_FdwXact + 20\n3 postgres 0x000000010d313fe3 shmem_exit + 179\n4 postgres 0x000000010d313e7a proc_exit_prepare + 122\n5 postgres 0x000000010d313da3 proc_exit + 19\n6 postgres 0x000000010d35112f PostgresMain + 3711\n7 postgres 0x000000010d27bb3a BackendRun + 570\n8 postgres 0x000000010d27af6b BackendStartup + 475\n9 postgres 0x000000010d279ed1 ServerLoop + 593\n10 postgres 0x000000010d277940 PostmasterMain + 6016\n11 postgres 0x000000010d1597b9 main + 761\n12 libdyld.dylib 0x00007fff7161e3d5 start + 1\n13 ??? 
0x0000000000000003 0x0 + 3\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Sat, 18 Jul 2020 01:55:56 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On 2020-07-17 15:55, Masahiko Sawada wrote:\n> On Fri, 17 Jul 2020 at 11:06, Masahiro Ikeda <ikedamsh@oss.nttdata.com> \n> wrote:\n>> \n>> On 2020-07-16 13:16, Masahiko Sawada wrote:\n>> > On Tue, 14 Jul 2020 at 17:24, Masahiro Ikeda <ikedamsh@oss.nttdata.com>\n>> > wrote:\n>> >>\n>> >> > I've attached the latest version patches. I've incorporated the review\n>> >> > comments I got so far and improved locking strategy.\n>> >>\n>> >> I want to ask a question about streaming replication with 2PC.\n>> >> Are you going to support 2PC with streaming replication?\n>> >>\n>> >> I tried streaming replication using v23 patches.\n>> >> I confirm that 2PC works with streaming replication,\n>> >> which there are primary/standby coordinator.\n>> >>\n>> >> But, in my understanding, the WAL of \"PREPARE\" and\n>> >> \"COMMIT/ABORT PREPARED\" can't be replicated to the standby server in\n>> >> sync.\n>> >>\n>> >> If this is right, the unresolved transaction can be occurred.\n>> >>\n>> >> For example,\n>> >>\n>> >> 1. PREPARE is done\n>> >> 2. crash primary before the WAL related to PREPARE is\n>> >> replicated to the standby server\n>> >> 3. promote standby server // but can't execute \"ABORT PREPARED\"\n>> >>\n>> >> In above case, the remote server has the unresolved transaction.\n>> >> Can we solve this problem to support in-sync replication?\n>> >>\n>> >> But, I think some users use async replication for performance.\n>> >> Do we need to document the limitation or make another solution?\n>> >>\n>> >\n>> > IIUC with synchronous replication, we can guarantee that WAL records\n>> > are written on both primary and replicas when the client got an\n>> > acknowledgment of commit. We don't replicate each WAL records\n>> > generated during transaction one by one in sync. In the case you\n>> > described, the client will get an error due to the server crash.\n>> > Therefore I think the user cannot expect WAL records generated so far\n>> > has been replicated. 
The same issue could happen also when the user\n>> > executes PREPARE TRANSACTION and the server crashes.\n>> \n>> Thanks! I didn't noticed the behavior when a user executes PREPARE\n>> TRANSACTION is same.\n>> \n>> IIUC with 2PC, there is a different point between (1)PREPARE \n>> TRANSACTION\n>> and (2)2PC.\n>> The point is that whether the client can know when the server crashed\n>> and it's global tx id.\n>> \n>> If (1)PREPARE TRANSACTION is failed, it's ok the client execute same\n>> command\n>> because if the remote server is already prepared the command will be\n>> ignored.\n>> \n>> But, if (2)2PC is failed with coordinator crash, the client can't know\n>> what operations should be done.\n>> \n>> If the old coordinator already executed PREPARED, there are some\n>> transaction which should be ABORT PREPARED.\n>> But if the PREPARED WAL is not sent to the standby, the new \n>> coordinator\n>> can't execute ABORT PREPARED.\n>> And the client can't know which remote servers have PREPARED\n>> transactions which should be ABORTED either.\n>> \n>> Even if the client can know that, only the old coordinator knows its\n>> global transaction id.\n>> Only the database administrator can analyze the old coordinator's log\n>> and then execute the appropriate commands manually, right?\n> \n> I think that's right. 
In the case of the coordinator crash, the user\n> can look orphaned foreign prepared transactions by checking the\n> 'identifier' column of pg_foreign_xacts on the new standby server and\n> the prepared transactions on the remote servers.\n\nI think there is a case where we can't check orphaned foreign\nprepared transactions in the pg_foreign_xacts view on the new standby\nserver. It confuses users and database administrators.\n\nIf the primary coordinator crashes after preparing a foreign transaction,\nbut before sending XLOG_FDWXACT_INSERT records to the standby server,\nthe standby server can't restore their transaction status and the\npg_foreign_xacts view doesn't show the prepared foreign transactions.\n\nSending XLOG_FDWXACT_INSERT records asynchronously leads to this problem.\n\n>> > To prevent this\n>> > issue, I think we would need to send each WAL records in sync but I'm\n>> > not sure it's reasonable behavior, and as long as we write WAL in the\n>> > local and then send it to replicas we would need a smart mechanism to\n>> > prevent this situation.\n>> \n>> I agree. To send each 2PC WAL records in sync must be with a large\n>> performance impact.\n>> At least, we need to document the limitation and how to handle this\n>> situation.\n> \n> Ok. I'll add it.\n\nThanks a lot.\n\n>> > Related to the pointing out by Ikeda-san, I realized that with the\n>> > current patch the backend waits for synchronous replication and then\n>> > waits for foreign transaction resolution. But it should be reversed.\n>> > Otherwise, it could lead to data loss even when the client got an\n>> > acknowledgment of commit. 
Also, when the user is using both atomic\n>> > commit and synchronous replication and wants to cancel waiting, he/she\n>> > will need to press ctl-c twice with the current patch, which also\n>> > should be fixed.\n>> \n>> I'm sorry that I can't understood.\n>> \n>> In my understanding, if COMMIT WAL is replicated to the standby in \n>> sync,\n>> the standby server can resolve the transaction after crash recovery in\n>> promoted phase.\n>> \n>> If reversed, there are some situation which can't guarantee atomic\n>> commit.\n>> In case that some foreign transaction resolutions are succeed but \n>> others\n>> are failed(and COMMIT WAL is not replicated),\n>> the standby must ABORT PREPARED because the COMMIT WAL is not\n>> replicated.\n>> This means that some foreign transactions are COMMITE PREPARED \n>> executed\n>> by primary coordinator,\n>> other foreign transactions can be ABORT PREPARED executed by secondary\n>> coordinator.\n> \n> You're right. Thank you for pointing out!\n> \n> If the coordinator crashes after the client gets acknowledgment of the\n> successful commit of the transaction but before sending\n> XLOG_FDWXACT_REMOVE record to the replicas, the FdwXact entries are\n> left on the replicas even after failover. But since we require FDW to\n> tolerate the error of undefined prepared transactions in\n> COMMIT/ROLLBACK PREPARED it won’t be a critical problem.\n\nI agree. It's ok that the primary coordinator sends\nXLOG_FDWXACT_REMOVE records asynchronously.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Sat, 18 Jul 2020 09:44:05 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, Jul 17, 2020 at 8:38 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 16 Jul 2020 at 13:53, tsunakawa.takay@fujitsu.com\n> <tsunakawa.takay@fujitsu.com> wrote:\n> >\n> > Hi Sawada san,\n> >\n> >\n> > I'm reviewing this patch series, and let me give some initial comments and questions. I'm looking at this with a hope that this will be useful purely as a FDW enhancement for our new use cases, regardless of whether the FDW will be used for Postgres scale-out.\n>\n> Thank you for reviewing this patch!\n>\n> Yes, this patch is trying to resolve the generic atomic commit problem\n> w.r.t. FDW, and will be useful also for Postgres scale-out.\n>\n\nI think it is important to get a consensus on this point. If I\nunderstand correctly, Tsunakawa-San doesn't sound to be convinced that\nFDW can be used for postgres scale-out and we are trying to paint this\nfeature as a step forward in the scale-out direction. As per my\nunderstanding, we don't have a very clear vision whether we will be\nable to achieve the other important aspects of scale-out feature like\nglobal visibility if we go in this direction and that is the reason I\nhave insisted in this and the other related thread [1] to at least\nhave a high-level idea of the same before going too far with this\npatch. It is quite possible that after spending months of efforts to\nstraighten out this patch/feature, we came to the conclusion that this\nneed to be re-designed or requires a lot of re-work to ensure that it\ncan be extended for global visibility. It is better to spend some\neffort up front to see if the proposed patch is a stepping stone for\nachieving what we want w.r.t postgres scale-out.\n\n\n[1] - https://www.postgresql.org/message-id/07b2c899-4ed0-4c87-1327-23c750311248%40postgrespro.ru\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 18 Jul 2020 16:33:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\r\n> I also believe they do. But I'm concerned that some FDW needs to start\r\n> a transaction differently when using 2PC. For instance, IIUC MySQL\r\n> also supports 2PC but the transaction needs to be started with \"XA\r\n> START id” when the transaction needs to be prepared. The transaction\r\n> started with XA START can be closed by XA END followed by XA PREPARE\r\n> or XA COMMIT ONE PHASE. It means that when starts a new transaction\r\n> the transaction needs to prepare the transaction identifier and to\r\n> know that 2PC might be used. It’s quite different from PostgreSQL. In\r\n> PostgreSQL, we can start a transaction by BEGIN and end it by PREPARE\r\n> TRANSACTION, COMMIT, or ROLLBACK. The transaction identifier is\r\n> required when PREPARE TRANSACTION.\r\n\r\nI guess Postgres is rather a minority in this regard. All I know is XA and its Java counterpart (Java Transaction API: JTA). In XA, the connection needs to be associated with an XID before its transaction work is performed.\r\nIf some transaction work is already done before associating with XID, xa_start() returns an error like this:\r\n\r\n[XA specification]\r\n--------------------------------------------------\r\n[XAER_OUTSIDE] \r\nThe resource manager is doing work outside any global transaction on behalf of \r\nthe application. \r\n--------------------------------------------------\r\n\r\n\r\n[Java Transaction API (JTA)]\r\n--------------------------------------------------\r\nvoid start(Xid xid, int flags) throws XAException \r\n\r\nThis method starts work on behalf of a transaction branch. \r\n...\r\n\r\n3.4.7 Local and Global Transactions \r\nThe resource adapter is encouraged to support the usage of both local and global \r\ntransactions within the same transactional connection. Local transactions are \r\ntransactions that are started and coordinated by the resource manager internally. 
The \r\nXAResource interface is not used for local transactions. \r\n\r\nWhen using the same connection to perform both local and global transactions, the \r\nfollowing rules apply: \r\n\r\n. The local transaction must be committed (or rolled back) before starting a \r\nglobal transaction in the connection. \r\n. The global transaction must be disassociated from the connection before any \r\nlocal transaction is started. \r\n--------------------------------------------------\r\n\r\n\r\n(FWIW, jdbc_fdw would expect to use JTA for this FDW 2PC?)\r\n\r\n\r\n\r\n> I haven’t tested the above yet and it’s just a desk plan. it's\r\n> definitely a good idea to try integrating this 2PC feature to FDWs\r\n> other than postgres_fdw to see if design and interfaces are\r\n> implemented sophisticatedly.\r\n\r\nYes, if we address this 2PC feature as an FDW enhancement, we need to make sure that at least some well-known DBMSs should be able to implement the new interface. The following part may help devise the interface:\r\n\r\n\r\n[References from XA specification]\r\n--------------------------------------------------\r\nThe primary use of xa_start() is to register a new transaction branch with the RM. \r\nThis marks the start of the branch. Subsequently, the AP, using the same thread of \r\ncontrol, uses the RM’s native interface to do useful work. All requests for service \r\nmade by the same thread are part of the same branch until the thread dissociates \r\nfrom the branch (see below). \r\n\r\n3.3.1 Registration of Resource Managers \r\nNormally, a TM involves all associated RMs in a transaction branch. (The TM’s set of \r\nRM switches, described in Section 4.3 on page 21 tells the TM which RMs are \r\nassociated with it.) The TM calls all these RMs with xa_start(), xa_end(), and \r\nxa_prepare (), although an RM that is not active in a branch need not participate further \r\n(see Section 2.3.2 on page 8). 
A technique to reduce overhead for infrequently-used \r\nRMs is discussed below. \r\n\r\nDynamic Registration \r\n\r\nCertain RMs, especially those involved in relatively few global transactions, may ask \r\nthe TM to assume they are not involved in a transaction. These RMs must register with \r\nthe TM before they do application work, to see whether the work is part of a global \r\ntransaction. The TM never calls these RMs with any form of xa_start(). An RM \r\ndeclares dynamic registration in its switch (see Section 4.3 on page 21). An RM can \r\nmake this declaration only on its own behalf, and doing so does not change the TM’s \r\nbehaviour with respect to other RMs. \r\n\r\nWhen an AP requests work from such an RM, before doing any work, the RM contacts \r\nthe TM by calling ax_reg(). The RM must call ax_reg() from the same thread of control \r\nthat the AP would use if it called ax_reg() directly. The TM returns to the RM the \r\nappropriate XID if the AP is in a global transaction. \r\n\r\nThe implications of dynamically registering are as follows: when a thread of control \r\nbegins working on behalf of a transaction branch, the transaction manager calls \r\nxa_start() for all resource managers known to the thread except those having \r\nTMREGISTER set in their xa_switch_t structure. Thus, those resource managers with \r\nthis flag set must explicitly join a branch with ax_reg(). Secondly, when a thread of \r\ncontrol is working on behalf of a branch, a transaction manager calls xa_end() for all \r\nresource managers known to the thread that either do not have TMREGISTER set in \r\ntheir xa_switch_t structure or have dynamically registered with ax_reg(). 
\r\n\r\n\r\nint \r\nxa_start(XID *xid, int rmid, long flags) \r\n\r\nDESCRIPTION \r\nA transaction manager calls xa_start() to inform a resource manager that an application \r\nmay do work on behalf of a transaction branch.\r\n...\r\nA transaction manager calls xa_start() only for those resource managers that do not \r\nhave TMREGISTER set in the flags element of their xa_switch_t structure. Resource \r\nmanagers with TMREGISTER set must use ax_reg() to join a transaction branch (see \r\nax_reg() for details). \r\n--------------------------------------------------\r\n\r\n\r\n> > To track the foreign transaction status, we can add GetTransactionStatus() to\r\n> the FDW interface as an alternative, can't we?\r\n> \r\n> I haven't thought such an interface but it sounds like the transaction\r\n> status is managed on both the core and FDWs. Could you elaborate on\r\n> that?\r\n\r\nI don't have such deep analysis. I just thought that the core could keep track of the local transaction status, and ask each participant FDW about its transaction status to determine an action.\r\n\r\n\r\n> > If the WAL records of multiple concurrent transactions are written and\r\n> synced separately, i.e. group commit doesn't take effect, then the OLTP\r\n> transaction performance will be unacceptable.\r\n> \r\n> I agree that it'll be a large performance penalty. I'd like to have it\r\n> but I’m not sure we should have it in the first version from the\r\n> perspective of complexity.\r\n\r\nI think at least we should have a rough image of how we can reach the goal. Otherwise, the current design/implementation may have to be overhauled with great efforts in the near future. 
Apart from that, I feel it's unnatural that the commit processing is serialized at the transaction resolver while the DML processing of multiple foreign transactions can be performed in parallel.\r\n\r\n\r\n> Since the procedure of 2PC is originally\r\n> high cost, in my opinion, the user should not use as much as possible\r\n> in terms of performance. Especially in OLTP, its cost will directly\r\n> affect the latency. I’d suggest designing database schema so\r\n> transaction touches only one foreign server but do you have concrete\r\n> OLTP usecase where normally requires 2PC, and how many servers\r\n> involved within a distributed transaction?\r\n\r\nI can't share the details, but some of our customers show interest in Postgres scale-out or FDW 2PC for the following use cases:\r\n\r\n* Multitenant OLTP where the data specific to one tenant is stored on one database server. On the other hand, some data are shared among all tenants, and they are stored on a separate server. The shared data and the tenant-specific data is updated in the same transaction (I don't know the frequency of such transactions.)\r\n\r\n* An IoT use case where each edge database server monitors and tracks the movement of objects in one area. Those edge database servers store the records of objects they manage. When an object gets out of one area and moves to another, the record for the object is moved between the two edge database servers using an atomic distributed transaction.\r\n\r\n(I wonder if TPC-C or TPC-E needs distributed transaction...)\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n\r\n\r\n",
"msg_date": "Mon, 20 Jul 2020 07:57:49 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
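The XA association rule quoted above (a resource manager rejects work-association with XAER_OUTSIDE when the connection already did local work, and a global branch must be dissociated before local work resumes) can be modeled as a tiny state machine. This is a sketch of the rule as described in the spec excerpt, not an implementation of any real XA library; all class and method names are illustrative:

```python
# Minimal model of the XA association rule from the spec excerpt: a
# connection doing local work cannot be associated with a global transaction
# branch (xa_start fails with XAER_OUTSIDE), and a connection associated
# with a branch cannot start a local transaction until it is dissociated.

class XAError(Exception):
    pass

class Connection:
    def __init__(self):
        self.local_active = False
        self.xid = None  # associated global transaction branch, if any

    def local_begin(self):
        if self.xid is not None:
            raise XAError("global branch must be dissociated first")
        self.local_active = True

    def local_commit(self):
        self.local_active = False

    def xa_start(self, xid):
        if self.local_active:
            # work is being done outside any global transaction
            raise XAError("XAER_OUTSIDE")
        self.xid = xid

    def xa_end(self):
        self.xid = None

conn = Connection()
conn.local_begin()
try:
    conn.xa_start("xid-1")
except XAError as e:
    print(e)            # XAER_OUTSIDE
conn.local_commit()
conn.xa_start("xid-1")  # allowed once the local transaction is closed
```

This is the behavioral gap the thread keeps circling: PostgreSQL decides at PREPARE TRANSACTION time, while XA-style managers require the association before any work is done.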
{
"msg_contents": "On Sat, 18 Jul 2020 at 01:45, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/07/17 20:04, Masahiko Sawada wrote:\n> > On Fri, 17 Jul 2020 at 14:22, tsunakawa.takay@fujitsu.com\n> > <tsunakawa.takay@fujitsu.com> wrote:\n> >>\n> >> From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> >> I have briefly checked the only oracle_fdw but in general I think that\n> >>> if an existing FDW supports transaction begin, commit, and rollback,\n> >>> these can be ported to new FDW transaction APIs easily.\n> >>\n> >> Does oracle_fdw support begin, commit and rollback?\n> >>\n> >> And most importantly, do other major DBMSs, including Oracle, provide the API for preparing a transaction? In other words, will the FDWs other than postgres_fdw really be able to take advantage of the new FDW functions to join the 2PC processing? I think we need to confirm that there are concrete examples.\n> >\n> > I also believe they do. But I'm concerned that some FDW needs to start\n> > a transaction differently when using 2PC. For instance, IIUC MySQL\n> > also supports 2PC but the transaction needs to be started with \"XA\n> > START id” when the transaction needs to be prepared. The transaction\n> > started with XA START can be closed by XA END followed by XA PREPARE\n> > or XA COMMIT ONE PHASE.\n>\n> This means that FDW should provide also the API for xa_end()?\n> Maybe we need to consider again which API we should provide in FDW,\n> based on XA specification?\n\nNot sure that we really need the API for xa_end(). It's not necessary\nat least in MySQL case. mysql_fdw can execute either XA END and XA\nPREPARE when FDW prepare API is called or XA END and XA COMMIT ONE\nPHASE when FDW commit API is called with FDWXACT_FLAG_ONEPHASE.\n\n>\n>\n> > It means that when starts a new transaction\n> > the transaction needs to prepare the transaction identifier and to\n> > know that 2PC might be used. It’s quite different from PostgreSQL. 
In\n> > PostgreSQL, we can start a transaction by BEGIN and end it by PREPARE\n> > TRANSACTION, COMMIT, or ROLLBACK. The transaction identifier is\n> > required when PREPARE TRANSACTION.\n> >\n> > With MySQL, I guess FDW needs a way to tell the (next) transaction\n> > needs to be started with XA START so it can be prepared. It could be a\n> > custom GUC or an SQL function. Then when starts a new transaction on\n> > MySQL server, FDW can generate and store a transaction identifier into\n> > somewhere alongside the connection. At the prepare phase, it passes\n> > the transaction identifier via GetPrepareId() API to the core.\n> >\n> > I haven’t tested the above yet and it’s just a desk plan. it's\n> > definitely a good idea to try integrating this 2PC feature to FDWs\n> > other than postgres_fdw to see if design and interfaces are\n> > implemented sophisticatedly.\n>\n> With the current patch, we track whether write queries are executed\n> in each server. Then, if the number of servers that execute write queries\n> is less than two, 2PC is skipped. This \"optimization\" is not necessary\n> (cannot be applied) when using mysql_fdw because the transaction starts\n> with XA START. Right?\n\nI think we can use XA COMMIT ONE PHASE in MySQL, which both prepares\nand commits the transaction. If the number of servers that executed\nwrite queries is less than two, the core transaction manager calls\nCommitForeignTransaction API with the flag FDWXACT_FLAG_ONEPHASE. That\nway, mysql_fdw can execute XA COMMIT ONE PHASE instead of XA PREPARE,\nfollowing XA END. On the other hand, when the number of such servers\nis greater than or equals to two, the core transaction manager calls\nPrepareForeignTransaction API and then CommitForeignTransactionAPI\nwithout that flag. 
In this case, mysql_fdw can execute XA END and XA\nPREPARE in PrepareForeignTransaction API call, and then XA COMMIT in\nCommitForeignTransaction API call.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 20 Jul 2020 17:28:21 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
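The mapping Sawada describes, i.e. which XA statements a hypothetical mysql_fdw would issue from each FDW transaction callback depending on whether one-phase commit was requested, can be written out as small command-sequence helpers. This is a sketch of the described protocol, not actual mysql_fdw code; the function names are invented, and the `ONEPHASE` constant merely stands in for the patch's FDWXACT_FLAG_ONEPHASE:

```python
# Sketch of the MySQL XA statement sequences described in the thread for a
# hypothetical mysql_fdw. Which statements run depends on whether the core
# transaction manager requests one-phase commit (fewer than two writers).

ONEPHASE = 0x01  # stand-in for the patch's FDWXACT_FLAG_ONEPHASE

def prepare_foreign_transaction(xid):
    # Called only on the two-phase path: close the branch, then prepare it.
    return [f"XA END '{xid}'", f"XA PREPARE '{xid}'"]

def commit_foreign_transaction(xid, flags):
    if flags & ONEPHASE:
        # Prepare phase was skipped: end the branch and commit in one round.
        return [f"XA END '{xid}'", f"XA COMMIT '{xid}' ONE PHASE"]
    # Branch was already ended and prepared by the prepare callback.
    return [f"XA COMMIT '{xid}'"]

# One writer only: a single callback commits the branch.
print(commit_foreign_transaction("fx1", ONEPHASE))
# Two or more writers: prepare callback first, commit callback later.
print(prepare_foreign_transaction("fx1") + commit_foreign_transaction("fx1", 0))
```

Under this sketch no separate xa_end() API is needed, matching the reply above: XA END is folded into whichever callback closes the branch.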
{
"msg_contents": "On Sat, 18 Jul 2020 at 01:55, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/07/16 14:47, Masahiko Sawada wrote:\n> > On Tue, 14 Jul 2020 at 11:19, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >>\n> >> On 2020/07/14 9:08, Masahiro Ikeda wrote:\n> >>>> I've attached the latest version patches. I've incorporated the review\n> >>>> comments I got so far and improved locking strategy.\n> >>>\n> >>> Thanks for updating the patch!\n> >>\n> >> +1\n> >> I'm interested in these patches and now studying them. While checking\n> >> the behaviors of the patched PostgreSQL, I got three comments.\n> >\n> > Thank you for testing this patch!\n> >\n> >>\n> >> 1. We can access to the foreign table even during recovery in the HEAD.\n> >> But in the patched version, when I did that, I got the following error.\n> >> Is this intentional?\n> >>\n> >> ERROR: cannot assign TransactionIds during recovery\n> >\n> > No, it should be fixed. I'm going to fix this by not collecting\n> > participants for atomic commit during recovery.\n>\n> Thanks for trying to fix the issues!\n>\n> I'd like to report one more issue. 
When I started new transaction\n> in the local server, executed INSERT in the remote server via\n> postgres_fdw and then quit psql, I got the following assertion failure.\n>\n> TRAP: FailedAssertion(\"fdwxact\", File: \"fdwxact.c\", Line: 1570)\n> 0 postgres 0x000000010d52f3c0 ExceptionalCondition + 160\n> 1 postgres 0x000000010cefbc49 ForgetAllFdwXactParticipants + 313\n> 2 postgres 0x000000010cefff14 AtProcExit_FdwXact + 20\n> 3 postgres 0x000000010d313fe3 shmem_exit + 179\n> 4 postgres 0x000000010d313e7a proc_exit_prepare + 122\n> 5 postgres 0x000000010d313da3 proc_exit + 19\n> 6 postgres 0x000000010d35112f PostgresMain + 3711\n> 7 postgres 0x000000010d27bb3a BackendRun + 570\n> 8 postgres 0x000000010d27af6b BackendStartup + 475\n> 9 postgres 0x000000010d279ed1 ServerLoop + 593\n> 10 postgres 0x000000010d277940 PostmasterMain + 6016\n> 11 postgres 0x000000010d1597b9 main + 761\n> 12 libdyld.dylib 0x00007fff7161e3d5 start + 1\n> 13 ??? 0x0000000000000003 0x0 + 3\n>\n\nThank you for reporting the issue!\n\nI've attached the latest version patch that incorporated all comments\nI got so far. I've removed the patch adding the 'prefer' mode of\nforeign_twophase_commit to keep the patch set simple.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 22 Jul 2020 16:41:48 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, Jul 17, 2020 at 9:56 PM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n>\n>\n> On 2020/07/16 14:47, Masahiko Sawada wrote:\n> > On Tue, 14 Jul 2020 at 11:19, Fujii Masao <masao.fujii@oss.nttdata.com>\n> wrote:\n> >>\n> >>\n> >>\n> >> On 2020/07/14 9:08, Masahiro Ikeda wrote:\n> >>>> I've attached the latest version patches. I've incorporated the review\n> >>>> comments I got so far and improved locking strategy.\n> >>>\n> >>> Thanks for updating the patch!\n> >>\n> >> +1\n> >> I'm interested in these patches and now studying them. While checking\n> >> the behaviors of the patched PostgreSQL, I got three comments.\n> >\n> > Thank you for testing this patch!\n> >\n> >>\n> >> 1. We can access to the foreign table even during recovery in the HEAD.\n> >> But in the patched version, when I did that, I got the following error.\n> >> Is this intentional?\n> >>\n> >> ERROR: cannot assign TransactionIds during recovery\n> >\n> > No, it should be fixed. I'm going to fix this by not collecting\n> > participants for atomic commit during recovery.\n>\n> Thanks for trying to fix the issues!\n>\n> I'd like to report one more issue. 
When I started new transaction\n> in the local server, executed INSERT in the remote server via\n> postgres_fdw and then quit psql, I got the following assertion failure.\n>\n> TRAP: FailedAssertion(\"fdwxact\", File: \"fdwxact.c\", Line: 1570)\n> 0 postgres 0x000000010d52f3c0\n> ExceptionalCondition + 160\n> 1 postgres 0x000000010cefbc49\n> ForgetAllFdwXactParticipants + 313\n> 2 postgres 0x000000010cefff14\n> AtProcExit_FdwXact + 20\n> 3 postgres 0x000000010d313fe3 shmem_exit + 179\n> 4 postgres 0x000000010d313e7a\n> proc_exit_prepare + 122\n> 5 postgres 0x000000010d313da3 proc_exit + 19\n> 6 postgres 0x000000010d35112f PostgresMain +\n> 3711\n> 7 postgres 0x000000010d27bb3a BackendRun + 570\n> 8 postgres 0x000000010d27af6b BackendStartup\n> + 475\n> 9 postgres 0x000000010d279ed1 ServerLoop + 593\n> 10 postgres 0x000000010d277940 PostmasterMain\n> + 6016\n> 11 postgres 0x000000010d1597b9 main + 761\n> 12 libdyld.dylib 0x00007fff7161e3d5 start + 1\n> 13 ??? 0x0000000000000003 0x0 + 3\n>\n\nI have done a test with the latest set of patches shared by Sawada and I am\nnot able to reproduce this issue. Started a prepared transaction on the\nlocal server, then did a couple of inserts in a remote table using\npostgres_fdw and then quit psql. I am not able to reproduce the assertion\nfailure.\n\n>\n> Regards,\n>\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n>\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : http://www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nEMAIL: mailto: ahsan.hadi@highgo.ca",
"msg_date": "Thu, 23 Jul 2020 12:34:23 +0500",
"msg_from": "Ahsan Hadi <ahsan.hadi@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Wed, Jul 22, 2020 at 12:42 PM Masahiko Sawada <\nmasahiko.sawada@2ndquadrant.com> wrote:\n\n> On Sat, 18 Jul 2020 at 01:55, Fujii Masao <masao.fujii@oss.nttdata.com>\n> wrote:\n> >\n> >\n> >\n> > On 2020/07/16 14:47, Masahiko Sawada wrote:\n> > > On Tue, 14 Jul 2020 at 11:19, Fujii Masao <masao.fujii@oss.nttdata.com>\n> wrote:\n> > >>\n> > >>\n> > >>\n> > >> On 2020/07/14 9:08, Masahiro Ikeda wrote:\n> > >>>> I've attached the latest version patches. I've incorporated the\n> review\n> > >>>> comments I got so far and improved locking strategy.\n> > >>>\n> > >>> Thanks for updating the patch!\n> > >>\n> > >> +1\n> > >> I'm interested in these patches and now studying them. While checking\n> > >> the behaviors of the patched PostgreSQL, I got three comments.\n> > >\n> > > Thank you for testing this patch!\n> > >\n> > >>\n> > >> 1. We can access to the foreign table even during recovery in the\n> HEAD.\n> > >> But in the patched version, when I did that, I got the following\n> error.\n> > >> Is this intentional?\n> > >>\n> > >> ERROR: cannot assign TransactionIds during recovery\n> > >\n> > > No, it should be fixed. I'm going to fix this by not collecting\n> > > participants for atomic commit during recovery.\n> >\n> > Thanks for trying to fix the issues!\n> >\n> > I'd like to report one more issue. 
When I started new transaction\n> > in the local server, executed INSERT in the remote server via\n> > postgres_fdw and then quit psql, I got the following assertion failure.\n> >\n> > TRAP: FailedAssertion(\"fdwxact\", File: \"fdwxact.c\", Line: 1570)\n> > 0 postgres 0x000000010d52f3c0\n> ExceptionalCondition + 160\n> > 1 postgres 0x000000010cefbc49\n> ForgetAllFdwXactParticipants + 313\n> > 2 postgres 0x000000010cefff14\n> AtProcExit_FdwXact + 20\n> > 3 postgres 0x000000010d313fe3 shmem_exit +\n> 179\n> > 4 postgres 0x000000010d313e7a\n> proc_exit_prepare + 122\n> > 5 postgres 0x000000010d313da3 proc_exit + 19\n> > 6 postgres 0x000000010d35112f PostgresMain\n> + 3711\n> > 7 postgres 0x000000010d27bb3a BackendRun +\n> 570\n> > 8 postgres 0x000000010d27af6b\n> BackendStartup + 475\n> > 9 postgres 0x000000010d279ed1 ServerLoop +\n> 593\n> > 10 postgres 0x000000010d277940\n> PostmasterMain + 6016\n> > 11 postgres 0x000000010d1597b9 main + 761\n> > 12 libdyld.dylib 0x00007fff7161e3d5 start + 1\n> > 13 ??? 0x0000000000000003 0x0 + 3\n> >\n>\n> Thank you for reporting the issue!\n>\n> I've attached the latest version patch that incorporated all comments\n> I got so far. I've removed the patch adding the 'prefer' mode of\n> foreign_twophase_commit to keep the patch set simple.\n>\n\nI have started to review the patchset. 
Just a quick comment.\n\nPatch v24-0002-Support-atomic-commit-among-multiple-foreign-ser.patch\ncontains changes (adding fdwxact includes) for\nsrc/backend/executor/nodeForeignscan.c,\nsrc/backend/executor/nodeModifyTable.c\nand src/backend/executor/execPartition.c files that doesn't seem to be\nrequired with the latest version.\n\nThanks\nBest regards\nMuhammad Usama\n\n>\n> Regards,\n>\n> --\n> Masahiko Sawada http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>",
"msg_date": "Thu, 23 Jul 2020 18:50:47 +0500",
"msg_from": "Muhammad Usama <m.usama@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Thu, 23 Jul 2020 at 22:51, Muhammad Usama <m.usama@gmail.com> wrote:\n>\n>\n>\n> On Wed, Jul 22, 2020 at 12:42 PM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n>>\n>> On Sat, 18 Jul 2020 at 01:55, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> >\n>> >\n>> >\n>> > On 2020/07/16 14:47, Masahiko Sawada wrote:\n>> > > On Tue, 14 Jul 2020 at 11:19, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> > >>\n>> > >>\n>> > >>\n>> > >> On 2020/07/14 9:08, Masahiro Ikeda wrote:\n>> > >>>> I've attached the latest version patches. I've incorporated the review\n>> > >>>> comments I got so far and improved locking strategy.\n>> > >>>\n>> > >>> Thanks for updating the patch!\n>> > >>\n>> > >> +1\n>> > >> I'm interested in these patches and now studying them. While checking\n>> > >> the behaviors of the patched PostgreSQL, I got three comments.\n>> > >\n>> > > Thank you for testing this patch!\n>> > >\n>> > >>\n>> > >> 1. We can access to the foreign table even during recovery in the HEAD.\n>> > >> But in the patched version, when I did that, I got the following error.\n>> > >> Is this intentional?\n>> > >>\n>> > >> ERROR: cannot assign TransactionIds during recovery\n>> > >\n>> > > No, it should be fixed. I'm going to fix this by not collecting\n>> > > participants for atomic commit during recovery.\n>> >\n>> > Thanks for trying to fix the issues!\n>> >\n>> > I'd like to report one more issue. 
When I started new transaction\n>> > in the local server, executed INSERT in the remote server via\n>> > postgres_fdw and then quit psql, I got the following assertion failure.\n>> >\n>> > TRAP: FailedAssertion(\"fdwxact\", File: \"fdwxact.c\", Line: 1570)\n>> > 0 postgres 0x000000010d52f3c0 ExceptionalCondition + 160\n>> > 1 postgres 0x000000010cefbc49 ForgetAllFdwXactParticipants + 313\n>> > 2 postgres 0x000000010cefff14 AtProcExit_FdwXact + 20\n>> > 3 postgres 0x000000010d313fe3 shmem_exit + 179\n>> > 4 postgres 0x000000010d313e7a proc_exit_prepare + 122\n>> > 5 postgres 0x000000010d313da3 proc_exit + 19\n>> > 6 postgres 0x000000010d35112f PostgresMain + 3711\n>> > 7 postgres 0x000000010d27bb3a BackendRun + 570\n>> > 8 postgres 0x000000010d27af6b BackendStartup + 475\n>> > 9 postgres 0x000000010d279ed1 ServerLoop + 593\n>> > 10 postgres 0x000000010d277940 PostmasterMain + 6016\n>> > 11 postgres 0x000000010d1597b9 main + 761\n>> > 12 libdyld.dylib 0x00007fff7161e3d5 start + 1\n>> > 13 ??? 0x0000000000000003 0x0 + 3\n>> >\n>>\n>> Thank you for reporting the issue!\n>>\n>> I've attached the latest version patch that incorporated all comments\n>> I got so far. I've removed the patch adding the 'prefer' mode of\n>> foreign_twophase_commit to keep the patch set simple.\n>\n>\n> I have started to review the patchset. Just a quick comment.\n>\n> Patch v24-0002-Support-atomic-commit-among-multiple-foreign-ser.patch\n> contains changes (adding fdwxact includes) for\n> src/backend/executor/nodeForeignscan.c, src/backend/executor/nodeModifyTable.c\n> and src/backend/executor/execPartition.c files that doesn't seem to be\n> required with the latest version.\n\nThanks for your comment.\n\nRight. I've removed these changes on the local branch.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 27 Jul 2020 15:59:45 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "\n\nOn 2020/07/27 15:59, Masahiko Sawada wrote:\n> On Thu, 23 Jul 2020 at 22:51, Muhammad Usama <m.usama@gmail.com> wrote:\n>>\n>>\n>>\n>> On Wed, Jul 22, 2020 at 12:42 PM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n>>>\n>>> On Sat, 18 Jul 2020 at 01:55, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>\n>>>>\n>>>>\n>>>> On 2020/07/16 14:47, Masahiko Sawada wrote:\n>>>>> On Tue, 14 Jul 2020 at 11:19, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>> On 2020/07/14 9:08, Masahiro Ikeda wrote:\n>>>>>>>> I've attached the latest version patches. I've incorporated the review\n>>>>>>>> comments I got so far and improved locking strategy.\n>>>>>>>\n>>>>>>> Thanks for updating the patch!\n>>>>>>\n>>>>>> +1\n>>>>>> I'm interested in these patches and now studying them. While checking\n>>>>>> the behaviors of the patched PostgreSQL, I got three comments.\n>>>>>\n>>>>> Thank you for testing this patch!\n>>>>>\n>>>>>>\n>>>>>> 1. We can access to the foreign table even during recovery in the HEAD.\n>>>>>> But in the patched version, when I did that, I got the following error.\n>>>>>> Is this intentional?\n>>>>>>\n>>>>>> ERROR: cannot assign TransactionIds during recovery\n>>>>>\n>>>>> No, it should be fixed. I'm going to fix this by not collecting\n>>>>> participants for atomic commit during recovery.\n>>>>\n>>>> Thanks for trying to fix the issues!\n>>>>\n>>>> I'd like to report one more issue. 
When I started new transaction\n>>>> in the local server, executed INSERT in the remote server via\n>>>> postgres_fdw and then quit psql, I got the following assertion failure.\n>>>>\n>>>> TRAP: FailedAssertion(\"fdwxact\", File: \"fdwxact.c\", Line: 1570)\n>>>> 0 postgres 0x000000010d52f3c0 ExceptionalCondition + 160\n>>>> 1 postgres 0x000000010cefbc49 ForgetAllFdwXactParticipants + 313\n>>>> 2 postgres 0x000000010cefff14 AtProcExit_FdwXact + 20\n>>>> 3 postgres 0x000000010d313fe3 shmem_exit + 179\n>>>> 4 postgres 0x000000010d313e7a proc_exit_prepare + 122\n>>>> 5 postgres 0x000000010d313da3 proc_exit + 19\n>>>> 6 postgres 0x000000010d35112f PostgresMain + 3711\n>>>> 7 postgres 0x000000010d27bb3a BackendRun + 570\n>>>> 8 postgres 0x000000010d27af6b BackendStartup + 475\n>>>> 9 postgres 0x000000010d279ed1 ServerLoop + 593\n>>>> 10 postgres 0x000000010d277940 PostmasterMain + 6016\n>>>> 11 postgres 0x000000010d1597b9 main + 761\n>>>> 12 libdyld.dylib 0x00007fff7161e3d5 start + 1\n>>>> 13 ??? 0x0000000000000003 0x0 + 3\n>>>>\n>>>\n>>> Thank you for reporting the issue!\n>>>\n>>> I've attached the latest version patch that incorporated all comments\n>>> I got so far. I've removed the patch adding the 'prefer' mode of\n>>> foreign_twophase_commit to keep the patch set simple.\n>>\n>>\n>> I have started to review the patchset. Just a quick comment.\n>>\n>> Patch v24-0002-Support-atomic-commit-among-multiple-foreign-ser.patch\n>> contains changes (adding fdwxact includes) for\n>> src/backend/executor/nodeForeignscan.c, src/backend/executor/nodeModifyTable.c\n>> and src/backend/executor/execPartition.c files that doesn't seem to be\n>> required with the latest version.\n> \n> Thanks for your comment.\n> \n> Right. I've removed these changes on the local branch.\n\nThe latest patches failed to be applied to the master branch. 
Could you rebase the patches?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 21 Aug 2020 00:36:40 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, 21 Aug 2020 at 00:36, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/07/27 15:59, Masahiko Sawada wrote:\n> > On Thu, 23 Jul 2020 at 22:51, Muhammad Usama <m.usama@gmail.com> wrote:\n> >>\n> >>\n> >>\n> >> On Wed, Jul 22, 2020 at 12:42 PM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n> >>>\n> >>> On Sat, 18 Jul 2020 at 01:55, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>\n> >>>>\n> >>>>\n> >>>> On 2020/07/16 14:47, Masahiko Sawada wrote:\n> >>>>> On Tue, 14 Jul 2020 at 11:19, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>\n> >>>>>>\n> >>>>>>\n> >>>>>> On 2020/07/14 9:08, Masahiro Ikeda wrote:\n> >>>>>>>> I've attached the latest version patches. I've incorporated the review\n> >>>>>>>> comments I got so far and improved locking strategy.\n> >>>>>>>\n> >>>>>>> Thanks for updating the patch!\n> >>>>>>\n> >>>>>> +1\n> >>>>>> I'm interested in these patches and now studying them. While checking\n> >>>>>> the behaviors of the patched PostgreSQL, I got three comments.\n> >>>>>\n> >>>>> Thank you for testing this patch!\n> >>>>>\n> >>>>>>\n> >>>>>> 1. We can access to the foreign table even during recovery in the HEAD.\n> >>>>>> But in the patched version, when I did that, I got the following error.\n> >>>>>> Is this intentional?\n> >>>>>>\n> >>>>>> ERROR: cannot assign TransactionIds during recovery\n> >>>>>\n> >>>>> No, it should be fixed. I'm going to fix this by not collecting\n> >>>>> participants for atomic commit during recovery.\n> >>>>\n> >>>> Thanks for trying to fix the issues!\n> >>>>\n> >>>> I'd like to report one more issue. 
When I started new transaction\n> >>>> in the local server, executed INSERT in the remote server via\n> >>>> postgres_fdw and then quit psql, I got the following assertion failure.\n> >>>>\n> >>>> TRAP: FailedAssertion(\"fdwxact\", File: \"fdwxact.c\", Line: 1570)\n> >>>> 0 postgres 0x000000010d52f3c0 ExceptionalCondition + 160\n> >>>> 1 postgres 0x000000010cefbc49 ForgetAllFdwXactParticipants + 313\n> >>>> 2 postgres 0x000000010cefff14 AtProcExit_FdwXact + 20\n> >>>> 3 postgres 0x000000010d313fe3 shmem_exit + 179\n> >>>> 4 postgres 0x000000010d313e7a proc_exit_prepare + 122\n> >>>> 5 postgres 0x000000010d313da3 proc_exit + 19\n> >>>> 6 postgres 0x000000010d35112f PostgresMain + 3711\n> >>>> 7 postgres 0x000000010d27bb3a BackendRun + 570\n> >>>> 8 postgres 0x000000010d27af6b BackendStartup + 475\n> >>>> 9 postgres 0x000000010d279ed1 ServerLoop + 593\n> >>>> 10 postgres 0x000000010d277940 PostmasterMain + 6016\n> >>>> 11 postgres 0x000000010d1597b9 main + 761\n> >>>> 12 libdyld.dylib 0x00007fff7161e3d5 start + 1\n> >>>> 13 ??? 0x0000000000000003 0x0 + 3\n> >>>>\n> >>>\n> >>> Thank you for reporting the issue!\n> >>>\n> >>> I've attached the latest version patch that incorporated all comments\n> >>> I got so far. I've removed the patch adding the 'prefer' mode of\n> >>> foreign_twophase_commit to keep the patch set simple.\n> >>\n> >>\n> >> I have started to review the patchset. Just a quick comment.\n> >>\n> >> Patch v24-0002-Support-atomic-commit-among-multiple-foreign-ser.patch\n> >> contains changes (adding fdwxact includes) for\n> >> src/backend/executor/nodeForeignscan.c, src/backend/executor/nodeModifyTable.c\n> >> and src/backend/executor/execPartition.c files that doesn't seem to be\n> >> required with the latest version.\n> >\n> > Thanks for your comment.\n> >\n> > Right. I've removed these changes on the local branch.\n>\n> The latest patches failed to be applied to the master branch. Could you rebase the patches?\n>\n\nThank you for letting me know. 
I've attached the latest version patch set.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 21 Aug 2020 15:25:29 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "> On 2020-07-17 15:55, Masahiko Sawada wrote:\n>> On Fri, 17 Jul 2020 at 11:06, Masahiro Ikeda \n>> <ikedamsh(at)oss(dot)nttdata(dot)com>\n>> wrote:\n>>> \n>>> On 2020-07-16 13:16, Masahiko Sawada wrote:\n>>>> On Tue, 14 Jul 2020 at 17:24, Masahiro Ikeda \n>>>> <ikedamsh(at)oss(dot)nttdata(dot)com>\n>>>> wrote:\n>>>>> \n>>>>>> I've attached the latest version patches. I've incorporated the \n>>>>>> review\n>>>>>> comments I got so far and improved locking strategy.\n>>>>> \n>>>>> I want to ask a question about streaming replication with 2PC.\n>>>>> Are you going to support 2PC with streaming replication?\n>>>>> \n>>>>> I tried streaming replication using v23 patches.\n>>>>> I confirm that 2PC works with streaming replication,\n>>>>> which there are primary/standby coordinator.\n>>>>> \n>>>>> But, in my understanding, the WAL of \"PREPARE\" and\n>>>>> \"COMMIT/ABORT PREPARED\" can't be replicated to the standby server \n>>>>> in\n>>>>> sync.\n>>>>> \n>>>>> If this is right, the unresolved transaction can be occurred.\n>>>>> \n>>>>> For example,\n>>>>> \n>>>>> 1. PREPARE is done\n>>>>> 2. crash primary before the WAL related to PREPARE is\n>>>>> replicated to the standby server\n>>>>> 3. promote standby server // but can't execute \"ABORT PREPARED\"\n>>>>> \n>>>>> In above case, the remote server has the unresolved transaction.\n>>>>> Can we solve this problem to support in-sync replication?\n>>>>> \n>>>>> But, I think some users use async replication for performance.\n>>>>> Do we need to document the limitation or make another solution?\n>>>>> \n>>>> \n>>>> IIUC with synchronous replication, we can guarantee that WAL records\n>>>> are written on both primary and replicas when the client got an\n>>>> acknowledgment of commit. We don't replicate each WAL records\n>>>> generated during transaction one by one in sync. 
In the case you\n>>>> described, the client will get an error due to the server crash.\n>>>> Therefore I think the user cannot expect WAL records generated so \n>>>> far\n>>>> has been replicated. The same issue could happen also when the user\n>>>> executes PREPARE TRANSACTION and the server crashes.\n>>> \n>>> Thanks! I didn't noticed the behavior when a user executes PREPARE\n>>> TRANSACTION is same.\n>>> \n>>> IIUC with 2PC, there is a different point between (1)PREPARE\n>>> TRANSACTION\n>>> and (2)2PC.\n>>> The point is that whether the client can know when the server crashed\n>>> and it's global tx id.\n>>> \n>>> If (1)PREPARE TRANSACTION is failed, it's ok the client execute same\n>>> command\n>>> because if the remote server is already prepared the command will be\n>>> ignored.\n>>> \n>>> But, if (2)2PC is failed with coordinator crash, the client can't \n>>> know\n>>> what operations should be done.\n>>> \n>>> If the old coordinator already executed PREPARED, there are some\n>>> transaction which should be ABORT PREPARED.\n>>> But if the PREPARED WAL is not sent to the standby, the new\n>>> coordinator\n>>> can't execute ABORT PREPARED.\n>>> And the client can't know which remote servers have PREPARED\n>>> transactions which should be ABORTED either.\n>>> \n>>> Even if the client can know that, only the old coordinator knows its\n>>> global transaction id.\n>>> Only the database administrator can analyze the old coordinator's log\n>>> and then execute the appropriate commands manually, right?\n>> \n>> I think that's right. 
In the case of the coordinator crash, the user\n>> can look orphaned foreign prepared transactions by checking the\n>> 'identifier' column of pg_foreign_xacts on the new standby server and\n>> the prepared transactions on the remote servers.\n>> \n> I think there is a case we can't check orphaned foreign\n> prepared transaction in pg_foreign_xacts view on the new standby \n> server.\n> It confuses users and database administrators.\n> \n> If the primary coordinator crashes after preparing foreign transaction,\n> but before sending XLOG_FDWXACT_INSERT records to the standby server,\n> the standby server can't restore their transaction status and\n> pg_foreign_xacts view doesn't show the prepared foreign transactions.\n> \n> To send XLOG_FDWXACT_INSERT records asynchronously leads this problem.\n\nIf the primary replicates XLOG_FDWXACT_INSERT to the standby\nasynchronously, some prepared transactions may remain unresolved\nforever.\n\nSince resolving this inconsistency manually is a hard operation,\nwe need to support synchronous XLOG_FDWXACT_INSERT replication.\n\nI understand that there is a lot of impact on performance,\nbut users can control consistency/durability vs. performance\nwith the synchronous_commit parameter.\n\nWhat do you think?\n\n\n> Thank you for letting me know. I've attached the latest version patch \n> set.\n\nThanks for updating.\nBut the latest patches failed to apply to the master branch.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 28 Aug 2020 17:49:59 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
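The crash window discussed in the message above — the coordinator fails after the remote servers have prepared the transaction but before the corresponding fdwxact WAL reaches the standby — can be sketched with a toy in-memory model. All class and function names here are illustrative assumptions, not the patch's actual API:

```python
# Toy model of the 2PC crash window: the coordinator runs PREPARE
# TRANSACTION on each remote first, and only afterwards does the
# fdwxact state reach the standby. A crash in between strands prepared
# transactions that a promoted standby knows nothing about.

class Remote:
    def __init__(self):
        self.prepared = set()  # stands in for pg_prepared_xacts

    def prepare(self, xid):
        self.prepared.add(xid)


class Coordinator:
    def __init__(self, remotes):
        self.remotes = remotes
        self.replicated_log = set()  # fdwxact entries the standby has seen

    def commit(self, xid, crash_before_replication=False):
        for r in self.remotes:
            r.prepare(xid)  # phase 1: PREPARE TRANSACTION on each remote
        if crash_before_replication:
            raise RuntimeError("coordinator crashed before WAL replication")
        self.replicated_log.add(xid)  # XLOG_FDWXACT_INSERT reaches standby
        for r in self.remotes:
            r.prepared.discard(xid)  # phase 2: COMMIT PREPARED


def orphans(coordinator, remotes):
    # Prepared on some remote but unknown to the (promoted) standby.
    prepared_anywhere = set().union(*(r.prepared for r in remotes))
    return prepared_anywhere - coordinator.replicated_log


remotes = [Remote(), Remote()]
coord = Coordinator(remotes)
coord.commit("fx_1")  # normal path: nothing left behind
try:
    coord.commit("fx_2", crash_before_replication=True)
except RuntimeError:
    pass
print(sorted(orphans(coord, remotes)))  # -> ['fx_2']
```

Running the sketch shows why asynchronous replication of XLOG_FDWXACT_INSERT leaves "fx_2" prepared on both remotes with no surviving node able to resolve it.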
{
"msg_contents": "On Fri, 28 Aug 2020 at 17:50, Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>\n> > I think there is a case we can't check orphaned foreign\n> > prepared transaction in pg_foreign_xacts view on the new standby\n> > server.\n> > It confuses users and database administrators.\n> >\n> > If the primary coordinator crashes after preparing foreign transaction,\n> > but before sending XLOG_FDWXACT_INSERT records to the standby server,\n> > the standby server can't restore their transaction status and\n> > pg_foreign_xacts view doesn't show the prepared foreign transactions.\n> >\n> > To send XLOG_FDWXACT_INSERT records asynchronously leads this problem.\n>\n> If the primary replicates XLOG_FDWXACT_INSERT to the standby\n> asynchronously,\n> some prepared transaction may be unsolved forever.\n>\n> Since I think to solve this inconsistency manually is hard operation,\n> we need to support synchronous XLOG_FDWXACT_INSERT replication.\n>\n> I understood that there are a lot of impact to the performance,\n> but users can control the consistency/durability vs performance\n> with synchronous_commit parameter.\n>\n> What do you think?\n\nI think the user can check such prepared transactions by seeing\ntransactions that exist on the foreign server's pg_prepared_xact but\nnot on the coordinator server's pg_foreign_xacts, no? To make checking\nsuch prepared transactions easy, perhaps we could contain the\ntimestamp to prepared transaction id. But I’m concerned the\nduplication of transaction id due to clock skew.\n\nIf there is a way to identify such unresolved foreign transactions and\nit's not cumbersome, given that the likelihood of problem you're\nconcerned is unlikely high I guess a certain number of would be able\nto accept it as a restriction. So I’d recommend not dealing with this\nproblem in the first version patch and we will be able to improve this\nfeature to deal with this problem as an additional feature. 
Thoughts?\n\n> > Thank you for letting me know. I've attached the latest version patch\n> > set.\n>\n> Thanks for updating.\n> But, the latest patches failed to be applied to the master branch.\n\nI'll submit the updated version patch.\n\nRegards,\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 3 Sep 2020 23:08:14 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
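The manual check Sawada suggests above — comparing the foreign server's pg_prepared_xacts against the coordinator's pg_foreign_xacts — amounts to a set difference over transaction identifiers. A minimal sketch, with made-up gids (the patch builds the real identifiers from the local xid):

```python
def find_orphaned(remote_prepared_gids, coordinator_fdwxact_gids):
    # A gid prepared on the remote but absent from the coordinator's
    # pg_foreign_xacts is a candidate for a manual ROLLBACK PREPARED.
    return sorted(set(remote_prepared_gids) - set(coordinator_fdwxact_gids))


# Hypothetical identifiers for illustration only.
remote_prepared = ["fx_1001_16384_10", "fx_1002_16384_10", "fx_1003_16384_10"]
coordinator_fdwxacts = ["fx_1001_16384_10", "fx_1003_16384_10"]
print(find_orphaned(remote_prepared, coordinator_fdwxacts))  # -> ['fx_1002_16384_10']
```

In practice the two input lists would come from querying each server, which is why the thread notes the foreign database must expose its prepared-transaction status for this check to work.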
{
"msg_contents": "On 2020-09-03 23:08, Masahiko Sawada wrote:\n> On Fri, 28 Aug 2020 at 17:50, Masahiro Ikeda <ikedamsh@oss.nttdata.com> \n> wrote:\n>> \n>> > I think there is a case we can't check orphaned foreign\n>> > prepared transaction in pg_foreign_xacts view on the new standby\n>> > server.\n>> > It confuses users and database administrators.\n>> >\n>> > If the primary coordinator crashes after preparing foreign transaction,\n>> > but before sending XLOG_FDWXACT_INSERT records to the standby server,\n>> > the standby server can't restore their transaction status and\n>> > pg_foreign_xacts view doesn't show the prepared foreign transactions.\n>> >\n>> > To send XLOG_FDWXACT_INSERT records asynchronously leads this problem.\n>> \n>> If the primary replicates XLOG_FDWXACT_INSERT to the standby\n>> asynchronously,\n>> some prepared transaction may be unsolved forever.\n>> \n>> Since I think to solve this inconsistency manually is hard operation,\n>> we need to support synchronous XLOG_FDWXACT_INSERT replication.\n>> \n>> I understood that there are a lot of impact to the performance,\n>> but users can control the consistency/durability vs performance\n>> with synchronous_commit parameter.\n>> \n>> What do you think?\n> \n> I think the user can check such prepared transactions by seeing\n> transactions that exist on the foreign server's pg_prepared_xact but\n> not on the coordinator server's pg_foreign_xacts, no? To make checking\n> such prepared transactions easy, perhaps we could contain the\n> timestamp to prepared transaction id. 
But I’m concerned the\n> duplication of transaction id due to clock skew.\n\nThanks for letting me know.\nI agree that we can check pg_prepared_xacts and pg_foreign_xacts.\n\nWe have to manually abort a transaction which exists in pg_prepared_xacts and\ndoesn't exist in pg_foreign_xacts, don't we?\nSo users have to use a foreign database which supports showing\nprepared transaction status, like pg_foreign_xacts.\n\nWhen would duplication of transaction ids occur?\nI'm sorry that I couldn't understand the point about clock skew.\n\nIIUC, since the prepared id may contain the coordinator's xid, there is\nno clock skew and we can determine the transaction_id uniquely.\nIf the fdw implements the GetPrepareId_function API and it generates\nthe transaction_id without the coordinator's xid, your concern will emerge.\nBut I can't think of a case where the transaction_id is generated without\nthe coordinator's xid.\n\n> If there is a way to identify such unresolved foreign transactions and\n> it's not cumbersome, given that the likelihood of problem you're\n> concerned is unlikely high I guess a certain number of would be able\n> to accept it as a restriction. So I’d recommend not dealing with this\n> problem in the first version patch and we will be able to improve this\n> feature to deal with this problem as an additional feature. Thoughts?\n\nI agree. Thanks for your comments.\n\n>> > Thank you for letting me know. I've attached the latest version patch\n>> > set.\n>> \n>> Thanks for updating.\n>> But, the latest patches failed to be applied to the master branch.\n> \n> I'll submit the updated version patch.\n\nThanks.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 07 Sep 2020 09:49:21 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, Aug 21, 2020 at 03:25:29PM +0900, Masahiko Sawada wrote:\n> Thank you for letting me know. I've attached the latest version patch set.\n\nThis needs a rebase. Patch 0002 is conflicting with some of the\nrecent changes done in syncrep.c and procarray.c, at least. \n--\nMichael",
"msg_date": "Mon, 7 Sep 2020 11:14:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "\n\nOn 2020/08/21 15:25, Masahiko Sawada wrote:\n> On Fri, 21 Aug 2020 at 00:36, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/07/27 15:59, Masahiko Sawada wrote:\n>>> On Thu, 23 Jul 2020 at 22:51, Muhammad Usama <m.usama@gmail.com> wrote:\n>>>>\n>>>>\n>>>>\n>>>> On Wed, Jul 22, 2020 at 12:42 PM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n>>>>>\n>>>>> On Sat, 18 Jul 2020 at 01:55, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>> On 2020/07/16 14:47, Masahiko Sawada wrote:\n>>>>>>> On Tue, 14 Jul 2020 at 11:19, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>>\n>>>>>>>>\n>>>>>>>>\n>>>>>>>> On 2020/07/14 9:08, Masahiro Ikeda wrote:\n>>>>>>>>>> I've attached the latest version patches. I've incorporated the review\n>>>>>>>>>> comments I got so far and improved locking strategy.\n>>>>>>>>>\n>>>>>>>>> Thanks for updating the patch!\n>>>>>>>>\n>>>>>>>> +1\n>>>>>>>> I'm interested in these patches and now studying them. While checking\n>>>>>>>> the behaviors of the patched PostgreSQL, I got three comments.\n>>>>>>>\n>>>>>>> Thank you for testing this patch!\n>>>>>>>\n>>>>>>>>\n>>>>>>>> 1. We can access to the foreign table even during recovery in the HEAD.\n>>>>>>>> But in the patched version, when I did that, I got the following error.\n>>>>>>>> Is this intentional?\n>>>>>>>>\n>>>>>>>> ERROR: cannot assign TransactionIds during recovery\n>>>>>>>\n>>>>>>> No, it should be fixed. I'm going to fix this by not collecting\n>>>>>>> participants for atomic commit during recovery.\n>>>>>>\n>>>>>> Thanks for trying to fix the issues!\n>>>>>>\n>>>>>> I'd like to report one more issue. 
When I started new transaction\n>>>>>> in the local server, executed INSERT in the remote server via\n>>>>>> postgres_fdw and then quit psql, I got the following assertion failure.\n>>>>>>\n>>>>>> TRAP: FailedAssertion(\"fdwxact\", File: \"fdwxact.c\", Line: 1570)\n>>>>>> 0 postgres 0x000000010d52f3c0 ExceptionalCondition + 160\n>>>>>> 1 postgres 0x000000010cefbc49 ForgetAllFdwXactParticipants + 313\n>>>>>> 2 postgres 0x000000010cefff14 AtProcExit_FdwXact + 20\n>>>>>> 3 postgres 0x000000010d313fe3 shmem_exit + 179\n>>>>>> 4 postgres 0x000000010d313e7a proc_exit_prepare + 122\n>>>>>> 5 postgres 0x000000010d313da3 proc_exit + 19\n>>>>>> 6 postgres 0x000000010d35112f PostgresMain + 3711\n>>>>>> 7 postgres 0x000000010d27bb3a BackendRun + 570\n>>>>>> 8 postgres 0x000000010d27af6b BackendStartup + 475\n>>>>>> 9 postgres 0x000000010d279ed1 ServerLoop + 593\n>>>>>> 10 postgres 0x000000010d277940 PostmasterMain + 6016\n>>>>>> 11 postgres 0x000000010d1597b9 main + 761\n>>>>>> 12 libdyld.dylib 0x00007fff7161e3d5 start + 1\n>>>>>> 13 ??? 0x0000000000000003 0x0 + 3\n>>>>>>\n>>>>>\n>>>>> Thank you for reporting the issue!\n>>>>>\n>>>>> I've attached the latest version patch that incorporated all comments\n>>>>> I got so far. I've removed the patch adding the 'prefer' mode of\n>>>>> foreign_twophase_commit to keep the patch set simple.\n>>>>\n>>>>\n>>>> I have started to review the patchset. Just a quick comment.\n>>>>\n>>>> Patch v24-0002-Support-atomic-commit-among-multiple-foreign-ser.patch\n>>>> contains changes (adding fdwxact includes) for\n>>>> src/backend/executor/nodeForeignscan.c, src/backend/executor/nodeModifyTable.c\n>>>> and src/backend/executor/execPartition.c files that doesn't seem to be\n>>>> required with the latest version.\n>>>\n>>> Thanks for your comment.\n>>>\n>>> Right. I've removed these changes on the local branch.\n>>\n>> The latest patches failed to be applied to the master branch. 
Could you rebase the patches?\n>>\n>\n> Thank you for letting me know. I've attached the latest version patch set.\n\nThanks for updating the patch!\n\nIMO it's not easy to commit this 2PC patch at once because it's still large\nand complicated. So I'm thinking it's better to separate the feature into\nseveral parts and commit them gradually. What about separating\nthe feature into the following parts?\n\n#1\nOriginally the server just executed the xact callback that each FDW registered\nwhen the transaction was committed. The patch changes this so that\nthe server manages the FDW participants in the transaction and triggers\nthem to execute COMMIT or ROLLBACK. IMO this change can be applied\nwithout the 2PC feature. Thoughts?\n\nEven if we commit this patch and add a new interface for FDWs, we would\nneed to keep the old interface for FDWs that provide only the old one.\n\n\n#2\nOriginally, when there was FDW access in the transaction,\nPREPARE TRANSACTION on that transaction failed with an error. The patch\nallows PREPARE TRANSACTION and COMMIT/ROLLBACK PREPARED\neven when FDW access occurs in the transaction. IMO this change can be\napplied without the *automatic* 2PC feature (i.e., PREPARE TRANSACTION and\nCOMMIT/ROLLBACK PREPARED being automatically executed for each FDW\ninside the \"top\" COMMIT command). Thoughts?\n\nI'm not sure yet whether automatic resolution of \"unresolved\" prepared\ntransactions by the resolver process is necessary for this change or not.\nIf it's not necessary, it's better to exclude the resolver process from this\nchange, at this stage, to make the patch simpler.\n\n\n#3\nFinally, IMO we can provide the patch supporting \"automatic\" 2PC for each FDW,\nbased on the #1 and #2 patches.\n\n\nWhat's your opinion about this?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 7 Sep 2020 17:59:07 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "\n\nOn 2020/09/07 17:59, Fujii Masao wrote:\n> \n> \n> On 2020/08/21 15:25, Masahiko Sawada wrote:\n>> On Fri, 21 Aug 2020 at 00:36, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>\n>>>\n>>>\n>>> On 2020/07/27 15:59, Masahiko Sawada wrote:\n>>>> On Thu, 23 Jul 2020 at 22:51, Muhammad Usama <m.usama@gmail.com> wrote:\n>>>>>\n>>>>>\n>>>>>\n>>>>> On Wed, Jul 22, 2020 at 12:42 PM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n>>>>>>\n>>>>>> On Sat, 18 Jul 2020 at 01:55, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>\n>>>>>>>\n>>>>>>>\n>>>>>>> On 2020/07/16 14:47, Masahiko Sawada wrote:\n>>>>>>>> On Tue, 14 Jul 2020 at 11:19, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>>>\n>>>>>>>>>\n>>>>>>>>>\n>>>>>>>>> On 2020/07/14 9:08, Masahiro Ikeda wrote:\n>>>>>>>>>>> I've attached the latest version patches. I've incorporated the review\n>>>>>>>>>>> comments I got so far and improved locking strategy.\n>>>>>>>>>>\n>>>>>>>>>> Thanks for updating the patch!\n>>>>>>>>>\n>>>>>>>>> +1\n>>>>>>>>> I'm interested in these patches and now studying them. While checking\n>>>>>>>>> the behaviors of the patched PostgreSQL, I got three comments.\n>>>>>>>>\n>>>>>>>> Thank you for testing this patch!\n>>>>>>>>\n>>>>>>>>>\n>>>>>>>>> 1. We can access to the foreign table even during recovery in the HEAD.\n>>>>>>>>> But in the patched version, when I did that, I got the following error.\n>>>>>>>>> Is this intentional?\n>>>>>>>>>\n>>>>>>>>> ERROR: cannot assign TransactionIds during recovery\n>>>>>>>>\n>>>>>>>> No, it should be fixed. I'm going to fix this by not collecting\n>>>>>>>> participants for atomic commit during recovery.\n>>>>>>>\n>>>>>>> Thanks for trying to fix the issues!\n>>>>>>>\n>>>>>>> I'd like to report one more issue. 
When I started new transaction\n>>>>>>> in the local server, executed INSERT in the remote server via\n>>>>>>> postgres_fdw and then quit psql, I got the following assertion failure.\n>>>>>>>\n>>>>>>> TRAP: FailedAssertion(\"fdwxact\", File: \"fdwxact.c\", Line: 1570)\n>>>>>>> 0 postgres 0x000000010d52f3c0 ExceptionalCondition + 160\n>>>>>>> 1 postgres 0x000000010cefbc49 ForgetAllFdwXactParticipants + 313\n>>>>>>> 2 postgres 0x000000010cefff14 AtProcExit_FdwXact + 20\n>>>>>>> 3 postgres 0x000000010d313fe3 shmem_exit + 179\n>>>>>>> 4 postgres 0x000000010d313e7a proc_exit_prepare + 122\n>>>>>>> 5 postgres 0x000000010d313da3 proc_exit + 19\n>>>>>>> 6 postgres 0x000000010d35112f PostgresMain + 3711\n>>>>>>> 7 postgres 0x000000010d27bb3a BackendRun + 570\n>>>>>>> 8 postgres 0x000000010d27af6b BackendStartup + 475\n>>>>>>> 9 postgres 0x000000010d279ed1 ServerLoop + 593\n>>>>>>> 10 postgres 0x000000010d277940 PostmasterMain + 6016\n>>>>>>> 11 postgres 0x000000010d1597b9 main + 761\n>>>>>>> 12 libdyld.dylib 0x00007fff7161e3d5 start + 1\n>>>>>>> 13 ??? 0x0000000000000003 0x0 + 3\n>>>>>>>\n>>>>>>\n>>>>>> Thank you for reporting the issue!\n>>>>>>\n>>>>>> I've attached the latest version patch that incorporated all comments\n>>>>>> I got so far. I've removed the patch adding the 'prefer' mode of\n>>>>>> foreign_twophase_commit to keep the patch set simple.\n>>>>>\n>>>>>\n>>>>> I have started to review the patchset. Just a quick comment.\n>>>>>\n>>>>> Patch v24-0002-Support-atomic-commit-among-multiple-foreign-ser.patch\n>>>>> contains changes (adding fdwxact includes) for\n>>>>> src/backend/executor/nodeForeignscan.c, src/backend/executor/nodeModifyTable.c\n>>>>> and src/backend/executor/execPartition.c files that doesn't seem to be\n>>>>> required with the latest version.\n>>>>\n>>>> Thanks for your comment.\n>>>>\n>>>> Right. I've removed these changes on the local branch.\n>>>\n>>> The latest patches failed to be applied to the master branch. 
Could you rebase the patches?\n>>>\n>>\n>> Thank you for letting me know. I've attached the latest version patch set.\n> \n> Thanks for updating the patch!\n> \n> IMO it's not easy to commit this 2PC patch at once because it's still large\n> and complicated. So I'm thinking it's better to separate the feature into\n> several parts and commit them gradually. What about separating\n> the feature into the following parts?\n> \n> #1\n> Originally the server just executed xact callback that each FDW registered\n> when the transaction was committed. The patch changes this so that\n> the server manages the participants of FDW in the transaction and triggers\n> them to execute COMMIT or ROLLBACK. IMO this change can be applied\n> without 2PC feature. Thought?\n> \n> Even if we commit this patch and add new interface for FDW, we would\n> need to keep the old interface, for the FDW providing only old interface.\n> \n> \n> #2\n> Originally when there was the FDW access in the transaction,\n> PREPARE TRANSACTION on that transaction failed with an error. The patch\n> allows PREPARE TRANSACTION and COMMIT/ROLLBACK PREPARED\n> even when FDW access occurs in the transaction. IMO this change can be\n> applied without *automatic* 2PC feature (i.e., PREPARE TRANSACTION and\n> COMMIT/ROLLBACK PREPARED are automatically executed for each FDW\n> inside \"top\" COMMIT command). 
Thought?\n> \n> I'm not sure yet whether automatic resolution of \"unresolved\" prepared\n> transactions by the resolver process is necessary for this change or not.\n> If it's not necessary, it's better to exclude the resolver process from this\n> change, at this stage, to make the patch simpler.\n> \n> \n> #3\n> Finally IMO we can provide the patch supporting \"automatic\" 2PC for each FDW,\n> based on the #1 and #2 patches.\n> \n> \n> What's your opinion about this?\n\nAlso I'd like to report some typos in the patch.\n\n+#define ServerSupportTransactionCallack(fdw_part) \\\n\n\"Callack\" in this macro name should be \"Callback\"?\n\n+#define SeverSupportTwophaseCommit(fdw_part) \\\n\n\"Sever\" in this macro name should be \"Server\"?\n\n+ proname => 'pg_stop_foreing_xact_resolver', provolatile => 'v', prorettype => 'bool',\n\n\"foreing\" should be \"foreign\"?\n\n+ * FdwXact entry we call get_preparedid callback to get a transaction\n\n\"get_preparedid\" should be \"get_prepareid\"?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 7 Sep 2020 23:38:07 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Mon, Sep 7, 2020 at 2:29 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> IMO it's not easy to commit this 2PC patch at once because it's still large\n> and complicated. So I'm thinking it's better to separate the feature into\n> several parts and commit them gradually.\n>\n\nHmm, I don't see that we have a consensus on the design and or\ninterfaces of this patch and without that proceeding for commit\ndoesn't seem advisable. Here are a few points which I remember offhand\nthat require more work.\n1. There is a competing design proposed and being discussed in another\nthread [1] for this purpose. I think both the approaches have pros and\ncons but there doesn't seem to be any conclusion yet on which one is\nbetter.\n2. In this thread, we have discussed to try integrating this patch\nwith some other FDWs (say MySQL, mongodb, etc.) to ensure that the\nAPIs we are exposing are general enough that other FDWs can use them\nto implement 2PC. I could see some speculations about the same but no\nconcrete work on the same has been done.\n3. In another thread [1], we have seen that the patch being discussed\nin this thread might need to re-designed if we have to use some other\ndesign for global-visibility than what is proposed in that thread. I\nthink it is quite likely that can happen considering no one is able to\ncome up with the solution to major design problems spotted in that\npatch yet.\n\nIt appears to me that even though these points were raised before in\nsome form we are just trying to bypass them to commit whatever we have\nin the current patch which I find quite surprising.\n\n[1] - https://www.postgresql.org/message-id/07b2c899-4ed0-4c87-1327-23c750311248%40postgrespro.ru\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 8 Sep 2020 07:04:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "\n\nOn 2020/09/08 10:34, Amit Kapila wrote:\n> On Mon, Sep 7, 2020 at 2:29 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> IMO it's not easy to commit this 2PC patch at once because it's still large\n>> and complicated. So I'm thinking it's better to separate the feature into\n>> several parts and commit them gradually.\n>>\n> \n> Hmm, I don't see that we have a consensus on the design and or\n> interfaces of this patch and without that proceeding for commit\n> doesn't seem advisable. Here are a few points which I remember offhand\n> that require more work.\n\nThanks!\n\n> 1. There is a competing design proposed and being discussed in another\n> thread [1] for this purpose. I think both the approaches have pros and\n> cons but there doesn't seem to be any conclusion yet on which one is\n> better.\n\nI was thinking that [1] was discussing global snapshot feature for\n\"atomic visibility\" rather than the solution like 2PC for \"atomic commit\".\nBut if another approach for \"atomic commit\" was also proposed at [1],\nthat's good. I will check that.\n\n> 2. In this thread, we have discussed to try integrating this patch\n> with some other FDWs (say MySQL, mongodb, etc.) to ensure that the\n> APIs we are exposing are general enough that other FDWs can use them\n> to implement 2PC. I could see some speculations about the same but no\n> concrete work on the same has been done.\n\nYes, you're right.\n\n> 3. In another thread [1], we have seen that the patch being discussed\n> in this thread might need to re-designed if we have to use some other\n> design for global-visibility than what is proposed in that thread. 
I\n> think it is quite likely that can happen considering no one is able to\n> come up with the solution to major design problems spotted in that\n> patch yet.\n\nDo you imply that the global-visibility patch should come before the \"2PC\" patch?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 8 Sep 2020 11:35:22 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Tue, Sep 8, 2020 at 8:05 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2020/09/08 10:34, Amit Kapila wrote:\n> > On Mon, Sep 7, 2020 at 2:29 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >> IMO it's not easy to commit this 2PC patch at once because it's still large\n> >> and complicated. So I'm thinking it's better to separate the feature into\n> >> several parts and commit them gradually.\n> >>\n> >\n> > Hmm, I don't see that we have a consensus on the design and or\n> > interfaces of this patch and without that proceeding for commit\n> > doesn't seem advisable. Here are a few points which I remember offhand\n> > that require more work.\n>\n> Thanks!\n>\n> > 1. There is a competing design proposed and being discussed in another\n> > thread [1] for this purpose. I think both the approaches have pros and\n> > cons but there doesn't seem to be any conclusion yet on which one is\n> > better.\n>\n> I was thinking that [1] was discussing global snapshot feature for\n> \"atomic visibility\" rather than the solution like 2PC for \"atomic commit\".\n> But if another approach for \"atomic commit\" was also proposed at [1],\n> that's good. I will check that.\n>\n\nOkay, that makes sense.\n\n> > 2. In this thread, we have discussed to try integrating this patch\n> > with some other FDWs (say MySQL, mongodb, etc.) to ensure that the\n> > APIs we are exposing are general enough that other FDWs can use them\n> > to implement 2PC. I could see some speculations about the same but no\n> > concrete work on the same has been done.\n>\n> Yes, you're right.\n>\n> > 3. In another thread [1], we have seen that the patch being discussed\n> > in this thread might need to re-designed if we have to use some other\n> > design for global-visibility than what is proposed in that thread. 
I\n> > think it is quite likely that can happen considering no one is able to\n> > come up with the solution to major design problems spotted in that\n> > patch yet.\n>\n> You imply that global-visibility patch should be come first before \"2PC\" patch?\n>\n\nI intend to say that the global-visibility work can impact this in a\nmajor way and we have analyzed that to some extent during a discussion\non the other thread. So, I think without having a complete\ndesign/solution that addresses both the 2PC and global-visibility, it\nis not apparent what is the right way to proceed. It seems to me that\nrather than working on individual (or smaller) parts one needs to come\nup with a bigger picture (or overall design) and then once we have\nfigured that out correctly, it would be easier to decide which parts\ncan go first.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 8 Sep 2020 08:33:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "From: Amit Kapila <amit.kapila16@gmail.com>\r\n> I intend to say that the global-visibility work can impact this in a\r\n> major way and we have analyzed that to some extent during a discussion\r\n> on the other thread. So, I think without having a complete\r\n> design/solution that addresses both the 2PC and global-visibility, it\r\n> is not apparent what is the right way to proceed. It seems to me that\r\n> rather than working on individual (or smaller) parts one needs to come\r\n> up with a bigger picture (or overall design) and then once we have\r\n> figured that out correctly, it would be easier to decide which parts\r\n> can go first.\r\n\r\nI'm really sorry I've been getting late and late and latex10 to publish the revised scale-out design wiki to discuss the big picture! I don't know why I'm taking this long time; I feel I were captive in a time prison (yes, nobody is holding me captive; I'm just late.) Please wait a few days.\r\n\r\nBut to proceed with the development, let me comment on the atomic commit and global visibility.\r\n\r\n* We have to hear from Andrey about their check on the possibility that Clock-SI could be Microsoft's patent and if we can avoid it.\r\n\r\n* I have a feeling that we can adopt the algorithm used by Spanner, CockroachDB, and YugabyteDB. That is, 2PC for multi-node atomic commit, Paxos or Raft for replica synchronization (in the process of commit) to make 2PC more highly available, and the timestamp-based global visibility. However, the timestamp-based approach makes the database instance shut down when the node's clock is distant from the other nodes.\r\n\r\n* Or, maybe we can use the following Commitment ordering that doesn't require the timestamp or any other information to be transferred among the cluster nodes. However, this seems to have to track the order of read and write operations among concurrent transactions to ensure the correct commit order, so I'm not sure about the performance. 
The MVCO paper seems to present the information we need, but I haven't understood it well yet (it's difficult.) Could you anybody kindly interpret this?\r\n\r\nCommitment ordering (CO) - yoavraz2\r\nhttps://sites.google.com/site/yoavraz2/the_principle_of_co\r\n\r\n\r\nAs for the Sawada-san's 2PC patch, which I find interesting purely as FDW enhancement, I raised the following issues to be addressed:\r\n\r\n1. Make FDW API implementable by other FDWs than postgres_fdw (this is what Amit-san kindly pointed out.) I think oracle_fdw and jdbc_fdw would be good examples to consider, while MySQL may not be good because it exposes the XA feature as SQL statements, not C functions as defined in the XA specification.\r\n\r\n2. 2PC processing is queued and serialized in one background worker. That severely subdues transaction throughput. Each backend should perform 2PC.\r\n\r\n3. postgres_fdw cannot detect remote updates when the UDF executed on a remote node updates data.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n\r\n",
"msg_date": "Tue, 8 Sep 2020 04:00:45 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Mon, 7 Sep 2020 at 17:59, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/08/21 15:25, Masahiko Sawada wrote:\n> > On Fri, 21 Aug 2020 at 00:36, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >>\n> >> On 2020/07/27 15:59, Masahiko Sawada wrote:\n> >>> On Thu, 23 Jul 2020 at 22:51, Muhammad Usama <m.usama@gmail.com> wrote:\n> >>>>\n> >>>>\n> >>>>\n> >>>> On Wed, Jul 22, 2020 at 12:42 PM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n> >>>>>\n> >>>>> On Sat, 18 Jul 2020 at 01:55, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>\n> >>>>>>\n> >>>>>>\n> >>>>>> On 2020/07/16 14:47, Masahiko Sawada wrote:\n> >>>>>>> On Tue, 14 Jul 2020 at 11:19, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>>>\n> >>>>>>>>\n> >>>>>>>>\n> >>>>>>>> On 2020/07/14 9:08, Masahiro Ikeda wrote:\n> >>>>>>>>>> I've attached the latest version patches. I've incorporated the review\n> >>>>>>>>>> comments I got so far and improved locking strategy.\n> >>>>>>>>>\n> >>>>>>>>> Thanks for updating the patch!\n> >>>>>>>>\n> >>>>>>>> +1\n> >>>>>>>> I'm interested in these patches and now studying them. While checking\n> >>>>>>>> the behaviors of the patched PostgreSQL, I got three comments.\n> >>>>>>>\n> >>>>>>> Thank you for testing this patch!\n> >>>>>>>\n> >>>>>>>>\n> >>>>>>>> 1. We can access to the foreign table even during recovery in the HEAD.\n> >>>>>>>> But in the patched version, when I did that, I got the following error.\n> >>>>>>>> Is this intentional?\n> >>>>>>>>\n> >>>>>>>> ERROR: cannot assign TransactionIds during recovery\n> >>>>>>>\n> >>>>>>> No, it should be fixed. I'm going to fix this by not collecting\n> >>>>>>> participants for atomic commit during recovery.\n> >>>>>>\n> >>>>>> Thanks for trying to fix the issues!\n> >>>>>>\n> >>>>>> I'd like to report one more issue. 
When I started new transaction\n> >>>>>> in the local server, executed INSERT in the remote server via\n> >>>>>> postgres_fdw and then quit psql, I got the following assertion failure.\n> >>>>>>\n> >>>>>> TRAP: FailedAssertion(\"fdwxact\", File: \"fdwxact.c\", Line: 1570)\n> >>>>>> 0 postgres 0x000000010d52f3c0 ExceptionalCondition + 160\n> >>>>>> 1 postgres 0x000000010cefbc49 ForgetAllFdwXactParticipants + 313\n> >>>>>> 2 postgres 0x000000010cefff14 AtProcExit_FdwXact + 20\n> >>>>>> 3 postgres 0x000000010d313fe3 shmem_exit + 179\n> >>>>>> 4 postgres 0x000000010d313e7a proc_exit_prepare + 122\n> >>>>>> 5 postgres 0x000000010d313da3 proc_exit + 19\n> >>>>>> 6 postgres 0x000000010d35112f PostgresMain + 3711\n> >>>>>> 7 postgres 0x000000010d27bb3a BackendRun + 570\n> >>>>>> 8 postgres 0x000000010d27af6b BackendStartup + 475\n> >>>>>> 9 postgres 0x000000010d279ed1 ServerLoop + 593\n> >>>>>> 10 postgres 0x000000010d277940 PostmasterMain + 6016\n> >>>>>> 11 postgres 0x000000010d1597b9 main + 761\n> >>>>>> 12 libdyld.dylib 0x00007fff7161e3d5 start + 1\n> >>>>>> 13 ??? 0x0000000000000003 0x0 + 3\n> >>>>>>\n> >>>>>\n> >>>>> Thank you for reporting the issue!\n> >>>>>\n> >>>>> I've attached the latest version patch that incorporated all comments\n> >>>>> I got so far. I've removed the patch adding the 'prefer' mode of\n> >>>>> foreign_twophase_commit to keep the patch set simple.\n> >>>>\n> >>>>\n> >>>> I have started to review the patchset. Just a quick comment.\n> >>>>\n> >>>> Patch v24-0002-Support-atomic-commit-among-multiple-foreign-ser.patch\n> >>>> contains changes (adding fdwxact includes) for\n> >>>> src/backend/executor/nodeForeignscan.c, src/backend/executor/nodeModifyTable.c\n> >>>> and src/backend/executor/execPartition.c files that doesn't seem to be\n> >>>> required with the latest version.\n> >>>\n> >>> Thanks for your comment.\n> >>>\n> >>> Right. 
I've removed these changes on the local branch.\n> >>\n> >> The latest patches failed to be applied to the master branch. Could you rebase the patches?\n> >>\n> >\n> > Thank you for letting me know. I've attached the latest version patch set.\n>\n> Thanks for updating the patch!\n>\n> IMO it's not easy to commit this 2PC patch at once because it's still large\n> and complicated. So I'm thinking it's better to separate the feature into\n> several parts and commit them gradually. What about separating\n> the feature into the following parts?\n>\n> #1\n> Originally the server just executed xact callback that each FDW registered\n> when the transaction was committed. The patch changes this so that\n> the server manages the participants of FDW in the transaction and triggers\n> them to execute COMMIT or ROLLBACK. IMO this change can be applied\n> without 2PC feature. Thought?\n>\n> Even if we commit this patch and add new interface for FDW, we would\n> need to keep the old interface, for the FDW providing only old interface.\n>\n>\n> #2\n> Originally when there was the FDW access in the transaction,\n> PREPARE TRANSACTION on that transaction failed with an error. The patch\n> allows PREPARE TRANSACTION and COMMIT/ROLLBACK PREPARED\n> even when FDW access occurs in the transaction. IMO this change can be\n> applied without *automatic* 2PC feature (i.e., PREPARE TRANSACTION and\n> COMMIT/ROLLBACK PREPARED are automatically executed for each FDW\n> inside \"top\" COMMIT command). 
Thought?\n>\n> I'm not sure yet whether automatic resolution of \"unresolved\" prepared\n> transactions by the resolver process is necessary for this change or not.\n> If it's not necessary, it's better to exclude the resolver process from this\n> change, at this stage, to make the patch simpler.\n>\n>\n> #3\n> Finally IMO we can provide the patch supporting \"automatic\" 2PC for each FDW,\n> based on the #1 and #2 patches.\n>\n>\n> What's your opinion about this?\n\nRegardless of which approach to the 2PC implementation is selected,\nsplitting the patch into small logical patches is a good idea, and the\nabove suggestion makes sense to me.\n\nRegarding #2, I guess that we would need the resolver and launcher\nprocesses even if we support only the manual PREPARE TRANSACTION and\nCOMMIT/ROLLBACK PREPARED commands:\n\nOn a COMMIT PREPARED command, I think we should commit the local\nprepared transaction first and then commit the foreign prepared transactions.\nOtherwise, it would violate atomic commit principles if the local node\nfailed to commit a foreign prepared transaction and the user then switched\nto ROLLBACK PREPARED. OTOH, once we have committed locally, we cannot\nswitch to a rollback. And attempting to commit foreign prepared transactions\ncould lead to an error due to a connection failure, an OOM caused by palloc, etc.\nTherefore we discussed using background processes, resolver and\nlauncher, to take charge of committing foreign prepared\ntransactions so that the process that executed COMMIT PREPARED will\nnever error out after the local commit. So I think patch #2 will also\nneed to add the resolver and launcher processes. And in\npatch #3 we will change the code to support automatic 2PC as you\nsuggested.\n\nIn addition, the automatic resolution of in-doubt\ntransactions can also be a separate patch, which will be patch #4.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 8 Sep 2020 14:16:17 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Mon, Sep 7, 2020 at 2:29 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> #2\n> Originally when there was the FDW access in the transaction,\n> PREPARE TRANSACTION on that transaction failed with an error. The patch\n> allows PREPARE TRANSACTION and COMMIT/ROLLBACK PREPARED\n> even when FDW access occurs in the transaction. IMO this change can be\n> applied without *automatic* 2PC feature (i.e., PREPARE TRANSACTION and\n> COMMIT/ROLLBACK PREPARED are automatically executed for each FDW\n> inside \"top\" COMMIT command). Thought?\n>\n> I'm not sure yet whether automatic resolution of \"unresolved\" prepared\n> transactions by the resolver process is necessary for this change or not.\n> If it's not necessary, it's better to exclude the resolver process from this\n> change, at this stage, to make the patch simpler.\n\nI agree with this. However, in case of explicit prepare, if we are not\ngoing to try automatic resolution, it might be better to provide a way\nto pass the information about transactions prepared on the foreign\nservers if they can not be resolved at the time of commit so that the\nuser can take it up to resolve those him/herself. This was an idea\nthat Tom had suggested at the very beginning of the first take.\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 8 Sep 2020 18:50:08 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "\n\nOn 2020/09/08 12:03, Amit Kapila wrote:\n> On Tue, Sep 8, 2020 at 8:05 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> On 2020/09/08 10:34, Amit Kapila wrote:\n>>> On Mon, Sep 7, 2020 at 2:29 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>\n>>>> IMO it's not easy to commit this 2PC patch at once because it's still large\n>>>> and complicated. So I'm thinking it's better to separate the feature into\n>>>> several parts and commit them gradually.\n>>>>\n>>>\n>>> Hmm, I don't see that we have a consensus on the design and or\n>>> interfaces of this patch and without that proceeding for commit\n>>> doesn't seem advisable. Here are a few points which I remember offhand\n>>> that require more work.\n>>\n>> Thanks!\n>>\n>>> 1. There is a competing design proposed and being discussed in another\n>>> thread [1] for this purpose. I think both the approaches have pros and\n>>> cons but there doesn't seem to be any conclusion yet on which one is\n>>> better.\n>>\n>> I was thinking that [1] was discussing global snapshot feature for\n>> \"atomic visibility\" rather than the solution like 2PC for \"atomic commit\".\n>> But if another approach for \"atomic commit\" was also proposed at [1],\n>> that's good. I will check that.\n>>\n> \n> Okay, that makes sense.\n\nI read Alexey's 2PC patch (0001-Add-postgres_fdw.use_twophase-GUC-to-use-2PC.patch)\nproposed at [1]. As Alexey told at that thread, there are two big differences\nbetween his patch and Sawada-san's; 1) whether there is the resolver process\nfor foreign transactions, 2) 2PC logic is implemented only inside postgres_fdw\nor both FDW and PostgreSQL core.\n\nI think that 2) is the first decision point. Alexey's 2PC patch is very simple\nand all the 2PC logic is implemented only inside postgres_fdw. But this\nmeans that 2PC is not usable if multiple types of FDW (e.g., postgres_fdw\nand mysql_fdw) participate at the transaction. 
This may be OK if we implement\nthe 2PC feature only for PostgreSQL sharding using postgres_fdw. But if we\nimplement 2PC as an FDW improvement independent of PostgreSQL\nsharding, I think it's necessary to support other FDWs. And this is our\ndirection, isn't it?\n\nSawada-san's patch supports that case by implementing some components\nfor that also in the PostgreSQL core. For example, with the patch, all the remote\ntransactions that participate in the transaction are managed by the PostgreSQL\ncore instead of the postgres_fdw layer.\n\nTherefore, at least regarding difference 2), I think that Sawada-san's\napproach is better. Thoughts?\n\n[1]\nhttps://postgr.es/m/3ef7877bfed0582019eab3d462a43275@postgrespro.ru\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 10 Sep 2020 03:12:56 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "Alexey-san, Sawada-san,\r\ncc: Fujii-san,\r\n\r\n\r\nFrom: Fujii Masao <masao.fujii@oss.nttdata.com>\r\n> But if we\r\n> implement 2PC as the improvement on FDW independently from PostgreSQL\r\n> sharding, I think that it's necessary to support other FDW. And this is our\r\n> direction, isn't it?\r\n\r\nI understand the same way as Fujii san. 2PC FDW is itself useful, so I think we should pursue the tidy FDW interface and good performance within the FDW framework. \"tidy\" means that many other FDWs should be able to implement it. I guess XA/JTA is the only material we can use to consider whether the FDW interface is good.\r\n\r\n\r\n> Sawada-san's patch supports that case by implementing some components\r\n> for that also in PostgreSQL core. For example, with the patch, all the remote\r\n> transactions that participate at the transaction are managed by PostgreSQL\r\n> core instead of postgres_fdw layer.\r\n> \r\n> Therefore, at least regarding the difference 2), I think that Sawada-san's\r\n> approach is better. Thought?\r\n\r\nI think so. Sawada-san's patch needs to address the design issues I posed before digging into the code for thorough review, though.\r\n\r\nBTW, is there something Sawada-san can take from Alexey-san's patch? I'm concerned about the performance for practical use. Do you two have differences in these points, for instance? The first two items are often cited to evaluate the algorithm's performance, as you know.\r\n\r\n* The number of round trips to remote nodes.\r\n* The number of disk I/Os on each node and all nodes in total (WAL, two-phase file, pg_subtrans file, CLOG?).\r\n* Are prepare and commit executed in parallel on remote nodes? (serious DBMSs do so)\r\n* Is there any serialization point in the processing? (Sawada-san's has one)\r\n\r\nI'm sorry to repeat myself, but I don't think we can compromise the 2PC performance. 
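To make the parallelism question in the third bullet above concrete, here is a minimal simulation of the issue-all-then-wait coordinator loop. All names and the sweep-based latency model are hypothetical illustrations, not code from either patch; in postgres_fdw terms the send/poll stand-ins would correspond to PQsendQuery and PQconsumeInput/PQgetResult.

```c
/* Sketch of a parallel 2PC phase: issue prepare (or commit) to every
 * participant without waiting, then poll all of them until done.
 * send_async()/poll_participant() are hypothetical stand-ins for
 * non-blocking libpq calls; latency is counted in poll sweeps. */
#include <stdbool.h>

typedef enum { PART_IDLE, PART_SENT, PART_DONE } PartState;

typedef struct Participant
{
    PartState state;
    int       pending_sweeps;   /* simulated reply latency */
} Participant;

/* Stand-in for an async send such as PQsendQuery("PREPARE TRANSACTION ..."). */
static void
send_async(Participant *p, int latency)
{
    p->state = PART_SENT;
    p->pending_sweeps = latency;
}

/* Stand-in for PQconsumeInput()+PQgetResult(): true once the reply arrived. */
static bool
poll_participant(Participant *p)
{
    if (p->state == PART_SENT && --p->pending_sweeps <= 0)
        p->state = PART_DONE;
    return p->state == PART_DONE;
}

/* Run one phase against n participants; returns the number of poll sweeps.
 * Because every send is issued up front, the total is bounded by the
 * slowest participant, not by the sum of all latencies. */
int
run_phase_parallel(Participant *parts, int n, const int *latencies)
{
    int sweeps = 0, done = 0;

    for (int i = 0; i < n; i++)
        send_async(&parts[i], latencies[i]);    /* steps 1/3: issue, don't wait */

    while (done < n)                            /* steps 2/4: wait for all */
    {
        done = 0;
        for (int i = 0; i < n; i++)
            if (poll_participant(&parts[i]))
                done++;
        sweeps++;
    }
    return sweeps;
}
```

With three participants whose replies take 3, 1, and 2 sweeps, the phase finishes in 3 sweeps rather than the 6 a serialized loop would need, which is the property asked about above.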
Of course, we recommend users to design a schema that co-locates data that each transaction accesses to avoid 2PC, but it's not always possible (e.g., when secondary indexes are used.)\r\n\r\nPlus, as the following quote from TPC-C specification shows, TPC-C requires 15% of (Payment?) transactions to do 2PC. (I knew this on Microsoft, CockroachDB, or Citus Data's site.)\r\n\r\n\r\n--------------------------------------------------\r\nIndependent of the mode of selection, the customer resident \r\nwarehouse is the home warehouse 85% of the time and is a randomly selected remote warehouse 15% of the time. \r\nThis can be implemented by generating two random numbers x and y within [1 .. 100]; \r\n\r\n. If x <= 85 a customer is selected from the selected district number (C_D_ID = D_ID) and the home warehouse \r\nnumber (C_W_ID = W_ID). The customer is paying through his/her own warehouse. \r\n\r\n. If x > 85 a customer is selected from a random district number (C_D_ID is randomly selected within [1 .. 10]), \r\nand a random remote warehouse number (C_W_ID is randomly selected within the range of active \r\nwarehouses (see Clause 4.2.2), and C_W_ID ≠ W_ID). The customer is paying through a warehouse and a \r\ndistrict other than his/her own. \r\n--------------------------------------------------\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n",
"msg_date": "Thu, 10 Sep 2020 01:13:08 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "\n\nOn 2020/09/10 10:13, tsunakawa.takay@fujitsu.com wrote:\n> Alexey-san, Sawada-san,\n> cc: Fujii-san,\n> \n> \n> From: Fujii Masao <masao.fujii@oss.nttdata.com>\n>> But if we\n>> implement 2PC as the improvement on FDW independently from PostgreSQL\n>> sharding, I think that it's necessary to support other FDW. And this is our\n>> direction, isn't it?\n> \n> I understand the same way as Fujii san. 2PC FDW is itself useful, so I think we should pursue the tidy FDW interface and good performance within the FDW framework. \"tidy\" means that many other FDWs should be able to implement it. I guess XA/JTA is the only material we can use to consider whether the FDW interface is good.\n\nOriginally start(), commit() and rollback() are supported as FDW interfaces. With his patch, prepare() is supported. What other interfaces need to be supported per XA/JTA?\n\nAs far as I and Sawada-san discussed this upthread, to support MySQL, another type of start() would be necessary to issue \"XA START id\" command. end() might also be necessary to issue \"XA END id\", but that command can be issued via prepare() together with \"XA PREPARE id\".\n\nI'm not familiar with XA/JTA and XA transaction interfaces on other major DBMS. So I'd like to know what other interfaces are necessary additionally?\n\n> \n> \n>> Sawada-san's patch supports that case by implementing some components\n>> for that also in PostgreSQL core. For example, with the patch, all the remote\n>> transactions that participate at the transaction are managed by PostgreSQL\n>> core instead of postgres_fdw layer.\n>>\n>> Therefore, at least regarding the difference 2), I think that Sawada-san's\n>> approach is better. Thought?\n> \n> I think so. Sawada-san's patch needs to address the design issues I posed before digging into the code for thorough review, though.\n> \n> BTW, is there something Sawada-san can take from Alexey-san's patch? I'm concerned about the performance for practical use. 
Do you two have differences in these points, for instance?\n\nIMO Sawada-san's version of 2PC is less performant, but it's because\nhis patch provides more functionality. For example, with his patch,\nWAL is written to automatically complete the unresolved foreign transactions\nin the case of failure. OTOH, Alexey's patch introduces no new WAL for 2PC.\nOf course, generating more WAL would cause more overhead.\nBut if we need the automatic resolution feature, it's inevitable to introduce\nnew WAL whichever patch we choose.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 10 Sep 2020 20:16:18 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Tue, 8 Sep 2020 at 13:00, tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Amit Kapila <amit.kapila16@gmail.com>\n> > I intend to say that the global-visibility work can impact this in a\n> > major way and we have analyzed that to some extent during a discussion\n> > on the other thread. So, I think without having a complete\n> > design/solution that addresses both the 2PC and global-visibility, it\n> > is not apparent what is the right way to proceed. It seems to me that\n> > rather than working on individual (or smaller) parts one needs to come\n> > up with a bigger picture (or overall design) and then once we have\n> > figured that out correctly, it would be easier to decide which parts\n> > can go first.\n>\n> I'm really sorry I've been getting later and later to publish the revised scale-out design wiki to discuss the big picture! I don't know why I'm taking this long time; I feel I were captive in a time prison (yes, nobody is holding me captive; I'm just late.) Please wait a few days.\n>\n> But to proceed with the development, let me comment on the atomic commit and global visibility.\n>\n> * We have to hear from Andrey about their check on the possibility that Clock-SI could be Microsoft's patent and if we can avoid it.\n>\n> * I have a feeling that we can adopt the algorithm used by Spanner, CockroachDB, and YugabyteDB. That is, 2PC for multi-node atomic commit, Paxos or Raft for replica synchronization (in the process of commit) to make 2PC more highly available, and the timestamp-based global visibility. However, the timestamp-based approach makes the database instance shut down when the node's clock is distant from the other nodes.\n>\n> * Or, maybe we can use the following Commitment ordering that doesn't require the timestamp or any other information to be transferred among the cluster nodes. 
However, this seems to have to track the order of read and write operations among concurrent transactions to ensure the correct commit order, so I'm not sure about the performance. The MVCO paper seems to present the information we need, but I haven't understood it well yet (it's difficult.) Could you anybody kindly interpret this?\n>\n> Commitment ordering (CO) - yoavraz2\n> https://sites.google.com/site/yoavraz2/the_principle_of_co\n>\n>\n> As for the Sawada-san's 2PC patch, which I find interesting purely as FDW enhancement, I raised the following issues to be addressed:\n>\n> 1. Make FDW API implementable by other FDWs than postgres_fdw (this is what Amit-san kindly pointed out.) I think oracle_fdw and jdbc_fdw would be good examples to consider, while MySQL may not be good because it exposes the XA feature as SQL statements, not C functions as defined in the XA specification.\n\nI agree that we need to verify that new FDW APIs will be suitable for other\nFDWs than postgres_fdw as well.\n\n>\n> 2. 2PC processing is queued and serialized in one background worker. That severely subdues transaction throughput. Each backend should perform 2PC.\n\nNot sure it's safe that each backend perform PREPARE and COMMIT\nPREPARED since the current design is for not leading an inconsistency\nbetween the actual transaction result and the result the user sees.\nBut in the future, I think we can have multiple background workers per\ndatabase for better performance.\n\n>\n> 3. postgres_fdw cannot detect remote updates when the UDF executed on a remote node updates data.\n\nI assume that you mean pushing the UDF down to a foreign server.\nIf so, I think we can do this by improving postgres_fdw. In the\ncurrent patch, registering and unregistering a foreign server to a\ngroup of 2PC and marking a foreign server as updated is the FDW's\nresponsibility. 
So perhaps if we had a way to tell postgres_fdw that the\nUDF might update the data on the foreign server, postgres_fdw could\nmark the foreign server as updated if the UDF is shippable.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 11 Sep 2020 00:37:03 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "\n\nOn 2020/09/11 0:37, Masahiko Sawada wrote:\n> On Tue, 8 Sep 2020 at 13:00, tsunakawa.takay@fujitsu.com\n> <tsunakawa.takay@fujitsu.com> wrote:\n>>\n>> From: Amit Kapila <amit.kapila16@gmail.com>\n>>> I intend to say that the global-visibility work can impact this in a\n>>> major way and we have analyzed that to some extent during a discussion\n>>> on the other thread. So, I think without having a complete\n>>> design/solution that addresses both the 2PC and global-visibility, it\n>>> is not apparent what is the right way to proceed. It seems to me that\n>>> rather than working on individual (or smaller) parts one needs to come\n>>> up with a bigger picture (or overall design) and then once we have\n>>> figured that out correctly, it would be easier to decide which parts\n>>> can go first.\n>>\n>> I'm really sorry I've been getting later and later to publish the revised scale-out design wiki to discuss the big picture! I don't know why I'm taking this long time; I feel I were captive in a time prison (yes, nobody is holding me captive; I'm just late.) Please wait a few days.\n>>\n>> But to proceed with the development, let me comment on the atomic commit and global visibility.\n>>\n>> * We have to hear from Andrey about their check on the possibility that Clock-SI could be Microsoft's patent and if we can avoid it.\n>>\n>> * I have a feeling that we can adopt the algorithm used by Spanner, CockroachDB, and YugabyteDB. That is, 2PC for multi-node atomic commit, Paxos or Raft for replica synchronization (in the process of commit) to make 2PC more highly available, and the timestamp-based global visibility. 
However, this seems to have to track the order of read and write operations among concurrent transactions to ensure the correct commit order, so I'm not sure about the performance. The MVCO paper seems to present the information we need, but I haven't understood it well yet (it's difficult.) Could you anybody kindly interpret this?\n>>\n>> Commitment ordering (CO) - yoavraz2\n>> https://sites.google.com/site/yoavraz2/the_principle_of_co\n>>\n>>\n>> As for the Sawada-san's 2PC patch, which I find interesting purely as FDW enhancement, I raised the following issues to be addressed:\n>>\n>> 1. Make FDW API implementable by other FDWs than postgres_fdw (this is what Amit-san kindly pointed out.) I think oracle_fdw and jdbc_fdw would be good examples to consider, while MySQL may not be good because it exposes the XA feature as SQL statements, not C functions as defined in the XA specification.\n> \n> I agree that we need to verify new FDW APIs will be suitable for other\n> FDWs than postgres_fdw as well.\n> \n>>\n>> 2. 2PC processing is queued and serialized in one background worker. That severely subdues transaction throughput. Each backend should perform 2PC.\n> \n> Not sure it's safe that each backend perform PREPARE and COMMIT\n> PREPARED since the current design is for not leading an inconsistency\n> between the actual transaction result and the result the user sees.\n\nCan I check my understanding about why the resolver process is necessary?\n\nFirstly, you think that issuing COMMIT PREPARED command to the foreign server can cause an error, for example, because of connection error, OOM, etc. On the other hand, only waiting for other process to issue the command is less likely to cause an error. Right?\n\nIf an error occurs in backend process after commit record is WAL-logged, the error would be reported to the client and it may misunderstand that the transaction failed even though commit record was already flushed. 
So you think that each backend should not issue the COMMIT PREPARED command to avoid that inconsistency. To avoid that, it's better to make another process, the resolver, issue the command and just make each backend wait for that to complete. Right?\n\nAlso using the resolver process has another merit; when there are unresolved foreign transactions but the corresponding backend exits, the resolver can try to resolve them. If something like this automatic resolution is necessary, a process like the resolver would be necessary. Right?\n\nTo the contrary, if we don't need such automatic resolution (i.e., unresolved foreign transactions always need to be resolved manually) and we can prevent the code that issues the COMMIT PREPARED command from causing an error (not sure if that's possible, though...), probably we don't need the resolver process. Right?\n\n\n> But in the future, I think we can have multiple background workers per\n> database for better performance.\n\nYes, that's an idea.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 11 Sep 2020 11:58:24 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "\tFrom: Fujii Masao <masao.fujii@oss.nttdata.com>\r\n> Originally start(), commit() and rollback() are supported as FDW interfaces.\r\n> As far as I and Sawada-san discussed this upthread, to support MySQL,\r\n> another type of start() would be necessary to issue \"XA START id\" command.\r\n> end() might be also necessary to issue \"XA END id\", but that command can be\r\n> issued via prepare() together with \"XA PREPARE id\".\r\n\r\nYeah, I think we can call xa_end and xa_prepare in the FDW's prepare function.\r\n\r\nThe issue is when to call xa_start, which requires XID as an argument. We don't want to call it in transactions that access only one node...?\r\n\r\n\r\n> With his patch, prepare() is supported. What other interfaces need to be\r\n> supported per XA/JTA?\r\n> \r\n> I'm not familiar with XA/JTA and XA transaction interfaces on other major\r\n> DBMS. So I'd like to know what other interfaces are necessary additionally?\r\n\r\nI think xa_start, xa_end, xa_prepare, xa_commit, xa_rollback, and xa_recover are sufficient. The XA specification is here:\r\n\r\nhttps://pubs.opengroup.org/onlinepubs/009680699/toc.pdf\r\n\r\nYou can see the function reference in Chapter 5, and the concept in Chapter 3. Chapter 6 was probably showing the state transition (function call sequence.)\r\n\r\n\r\n> IMO Sawada-san's version of 2PC is less performant, but it's because his\r\n> patch provides more functionality. For example, with his patch, WAL is written\r\n> to automatically complete the unresolved foreign transactions in the case of\r\n> failure. OTOH, Alexey patch introduces no new WAL for 2PC.\r\n> Of course, generating more WAL would cause more overhead.\r\n> But if we need automatic resolution feature, it's inevitable to introduce new\r\n> WAL whichever the patch we choose.\r\n\r\nPlease do not get me wrong. I know Sawada-san is trying to ensure durability. 
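For illustration, the XA entry points listed above could map onto an FDW callback table along the following lines. The struct, field, and function names here are hypothetical sketches, not taken from either patch or from the xa_switch_t in the specification; the stubs exist only to show the call sequence.

```c
/* Hypothetical FDW callback table mirroring the XA entry points
 * (xa_start, xa_end, xa_prepare, xa_commit, xa_rollback, xa_recover).
 * Each callback takes a transaction id and returns 0 (like XA_OK) on
 * success; all names are illustrative. */
#include <string.h>

typedef struct FdwXactId
{
    char data[200];             /* XID string; XA allows 128 bytes of gtrid+bqual */
} FdwXactId;

typedef struct FdwXactCallbacks
{
    int (*start)(const FdwXactId *xid);     /* xa_start: "XA START id" on MySQL; no-op for postgres_fdw */
    int (*end)(const FdwXactId *xid);       /* xa_end: can be folded into the prepare step */
    int (*prepare)(const FdwXactId *xid);   /* xa_prepare: "PREPARE TRANSACTION 'id'" */
    int (*commit)(const FdwXactId *xid);    /* xa_commit: "COMMIT PREPARED 'id'" */
    int (*rollback)(const FdwXactId *xid);  /* xa_rollback: "ROLLBACK PREPARED 'id'" */
    int (*recover)(FdwXactId *xids, int n); /* xa_recover: list in-doubt transactions */
} FdwXactCallbacks;

/* Call order the core transaction manager would use for the prepare
 * phase of one participant: end (if provided), then prepare. */
int
fdw_xact_prepare_participant(const FdwXactCallbacks *cb, const FdwXactId *xid)
{
    if (cb->end)
    {
        int rc = cb->end(xid);

        if (rc != 0)
            return rc;
    }
    return cb->prepare(xid);
}

/* Stubs recording the call sequence (illustration only). */
static const char *call_log[4];
static int  ncalls = 0;

static int
log_call(const char *name)
{
    call_log[ncalls++] = name;
    return 0;
}

static int stub_end(const FdwXactId *xid)     { (void) xid; return log_call("end"); }
static int stub_prepare(const FdwXactId *xid) { (void) xid; return log_call("prepare"); }
```

A driver that fills in only `end` and `prepare` sees them invoked in that order, matching the "xa_end can be issued via prepare" point above; a postgres_fdw-style implementation could leave `start` NULL.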
I just wanted to know what each patch does and at how much cost in terms of disk and network I/Os, and if one patch can take something from another for less cost. I'm simply guessing (without having read the code yet) that each transaction basically does:\r\n\r\n- two round trips (prepare, commit) to each remote node\r\n- two WAL writes (prepare, commit) on the local node and each remote node\r\n- one write for the two-phase state file on each remote node\r\n- one write to record participants on the local node\r\n\r\nIt felt hard to think about the algorithm efficiency from the source code. As you may have seen, the DBMS textbook and/or papers describe disk and network I/Os to evaluate algorithms. I thought such information would be useful before going deeper into the source code. Maybe such things can be written in the following Sawada-san's wiki or README in the end.\r\n\r\nAtomic Commit of Distributed Transactions\r\nhttps://wiki.postgresql.org/wiki/Atomic_Commit_of_Distributed_Transactions\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n\r\n\r\n",
"msg_date": "Fri, 11 Sep 2020 08:15:08 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\r\n> On Tue, 8 Sep 2020 at 13:00, tsunakawa.takay@fujitsu.com\r\n> <tsunakawa.takay@fujitsu.com> wrote:\r\n> > 2. 2PC processing is queued and serialized in one background worker. That\r\n> severely subdues transaction throughput. Each backend should perform\r\n> 2PC.\r\n> \r\n> Not sure it's safe that each backend perform PREPARE and COMMIT\r\n> PREPARED since the current design is for not leading an inconsistency\r\n> between the actual transaction result and the result the user sees.\r\n\r\nAs Fujii-san is asking, I also would like to know what situation you think is not safe. Are you worried that the FDW's commit function might call ereport(ERROR | FATAL | PANIC)? If so, can't we stipulate that the FDW implementor should ensure that the commit function always returns control to the caller?\r\n\r\n\r\n> But in the future, I think we can have multiple background workers per\r\n> database for better performance.\r\n\r\nDoes the database in \"per database\" mean the local database (that applications connect to), or the remote database accessed via FDW?\r\n\r\nI'm wondering how the FDW and background worker(s) can realize parallel prepare and parallel commit. That is, the coordinator transaction performs:\r\n\r\n1. Issue prepare to all participant nodes, but doesn't wait for the reply for each issue.\r\n2. Waits for replies from all participants.\r\n3. Issue commit to all participant nodes, but doesn't wait for the reply for each issue.\r\n4. Waits for replies from all participants.\r\n\r\nIf we just consider PostgreSQL and don't think about FDW, we can use libpq async functions -- PQsendQuery, PQconsumeInput, and PQgetResult. pgbench uses them so that one thread can issue SQL statements on multiple connections in parallel.\r\n\r\nBut when we consider the FDW interface, plus other DBMSs, how can we achieve the parallelism?\r\n\r\n\r\n> > 3. 
postgres_fdw cannot detect remote updates when the UDF executed on a\r\n> remote node updates data.\r\n> \r\n> I assume that you mean the pushing the UDF down to a foreign server.\r\n> If so, I think we can do this by improving postgres_fdw. In the current patch,\r\n> registering and unregistering a foreign server to a group of 2PC and marking a\r\n> foreign server as updated is FDW responsible. So perhaps if we had a way to\r\n> tell postgres_fdw that the UDF might update the data on the foreign server,\r\n> postgres_fdw could mark the foreign server as updated if the UDF is shippable.\r\n\r\nMaybe we can consider VOLATILE functions update data. That may be overreaction, though.\r\n\r\nAnother idea is to add a new value to the ReadyForQuery message in the FE/BE protocol. Say, 'U' if in a transaction block that updated data. Here we consider \"updated\" as having allocated an XID.\r\n\r\n52.7. Message Formats\r\nhttps://www.postgresql.org/docs/devel/protocol-message-formats.html\r\n--------------------------------------------------\r\nReadyForQuery (B)\r\n\r\nByte1\r\nCurrent backend transaction status indicator. Possible values are 'I' if idle (not in a transaction block); 'T' if in a transaction block; or 'E' if in a failed transaction block (queries will be rejected until block is ended).\r\n--------------------------------------------------\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n",
"msg_date": "Fri, 11 Sep 2020 09:24:00 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, 11 Sep 2020 at 11:58, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/09/11 0:37, Masahiko Sawada wrote:\n> > On Tue, 8 Sep 2020 at 13:00, tsunakawa.takay@fujitsu.com\n> > <tsunakawa.takay@fujitsu.com> wrote:\n> >>\n> >> From: Amit Kapila <amit.kapila16@gmail.com>\n> >>> I intend to say that the global-visibility work can impact this in a\n> >>> major way and we have analyzed that to some extent during a discussion\n> >>> on the other thread. So, I think without having a complete\n> >>> design/solution that addresses both the 2PC and global-visibility, it\n> >>> is not apparent what is the right way to proceed. It seems to me that\n> >>> rather than working on individual (or smaller) parts one needs to come\n> >>> up with a bigger picture (or overall design) and then once we have\n> >>> figured that out correctly, it would be easier to decide which parts\n> >>> can go first.\n> >>\n> >> I'm really sorry I've been getting later and later to publish the revised scale-out design wiki to discuss the big picture! I don't know why I'm taking this long time; I feel I were captive in a time prison (yes, nobody is holding me captive; I'm just late.) Please wait a few days.\n> >>\n> >> But to proceed with the development, let me comment on the atomic commit and global visibility.\n> >>\n> >> * We have to hear from Andrey about their check on the possibility that Clock-SI could be Microsoft's patent and if we can avoid it.\n> >>\n> >> * I have a feeling that we can adopt the algorithm used by Spanner, CockroachDB, and YugabyteDB. That is, 2PC for multi-node atomic commit, Paxos or Raft for replica synchronization (in the process of commit) to make 2PC more highly available, and the timestamp-based global visibility. 
However, the timestamp-based approach makes the database instance shut down when the node's clock is distant from the other nodes.\n> >>\n> >> * Or, maybe we can use the following Commitment ordering that doesn't require the timestamp or any other information to be transferred among the cluster nodes. However, this seems to have to track the order of read and write operations among concurrent transactions to ensure the correct commit order, so I'm not sure about the performance. The MVCO paper seems to present the information we need, but I haven't understood it well yet (it's difficult.) Could you anybody kindly interpret this?\n> >>\n> >> Commitment ordering (CO) - yoavraz2\n> >> https://sites.google.com/site/yoavraz2/the_principle_of_co\n> >>\n> >>\n> >> As for the Sawada-san's 2PC patch, which I find interesting purely as FDW enhancement, I raised the following issues to be addressed:\n> >>\n> >> 1. Make FDW API implementable by other FDWs than postgres_fdw (this is what Amit-san kindly pointed out.) I think oracle_fdw and jdbc_fdw would be good examples to consider, while MySQL may not be good because it exposes the XA feature as SQL statements, not C functions as defined in the XA specification.\n> >\n> > I agree that we need to verify new FDW APIs will be suitable for other\n> > FDWs than postgres_fdw as well.\n> >\n> >>\n> >> 2. 2PC processing is queued and serialized in one background worker. That severely subdues transaction throughput. Each backend should perform 2PC.\n> >\n> > Not sure it's safe that each backend perform PREPARE and COMMIT\n> > PREPARED since the current design is for not leading an inconsistency\n> > between the actual transaction result and the result the user sees.\n>\n> Can I check my understanding about why the resolver process is necessary?\n>\n> Firstly, you think that issuing COMMIT PREPARED command to the foreign server can cause an error, for example, because of connection error, OOM, etc. 
On the other hand, only waiting for other process to issue the command is less likely to cause an error. Right?\n>\n> If an error occurs in backend process after commit record is WAL-logged, the error would be reported to the client and it may misunderstand that the transaction failed even though commit record was already flushed. So you think that each backend should not issue COMMIT PREPARED command to avoid that inconsistency. To avoid that, it's better to make other process, the resolver, issue the command and just make each backend wait for that to completed. Right?\n>\n> Also using the resolver process has another merit; when there are unresolved foreign transactions but the corresponding backend exits, the resolver can try to resolve them. If something like this automatic resolution is necessary, the process like the resolver would be necessary. Right?\n>\n> To the contrary, if we don't need such automatic resolution (i.e., unresolved foreign transactions always need to be resolved manually) and we can prevent the code to issue COMMIT PREPARED command from causing an error (not sure if that's possible, though...), probably we don't need the resolver process. Right?\n\nYes, I'm on the same page about all the above explanations.\n\nThe resolver process has two functionalities: resolving foreign\ntransactions automatically when the user issues COMMIT (the case you\ndescribed in the second paragraph), and resolving foreign transactions\nwhen the corresponding backend no longer exists or when the server\ncrashes in the middle of 2PC (described in the third\nparagraph).\n\nConsidering the design without the resolver process, I think we can\neasily replace the latter with the manual resolution. OTOH, it's not\neasy for the former. 
I have no idea about a better design for now,\nalthough, as you described, if we could ensure that the process\ndoesn't raise an error during resolving foreign transactions after\ncommitting the local transaction we would not need the resolver\nprocess.\n\nOr the second idea would be that the backend commits only the local\ntransaction then returns the acknowledgment of COMMIT to the user\nwithout resolving foreign transactions. Then the user manually\nresolves the foreign transactions by, for example, using the SQL\nfunction pg_resolve_foreign_xact() within a separate transaction. That\nway, even if an error occurred during resolving foreign transactions\n(e.g., executing COMMIT PREPARED), it’s okay as the user is already\naware of the local transaction having been committed and can retry to\nresolve the unresolved foreign transaction. So we won't need the\nresolver process while avoiding such inconsistency.\n\nBut a drawback would be that the transaction commit doesn't ensure\nthat all foreign transactions are completed. The subsequent\ntransactions would need to check if the previous distributed\ntransaction is completed to see its results. I’m not sure it’s a good\ndesign in terms of usability.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 11 Sep 2020 20:06:52 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, 11 Sep 2020 at 18:24, tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> > On Tue, 8 Sep 2020 at 13:00, tsunakawa.takay@fujitsu.com\n> > <tsunakawa.takay@fujitsu.com> wrote:\n> > > 2. 2PC processing is queued and serialized in one background worker. That\n> > severely subdues transaction throughput. Each backend should perform\n> > 2PC.\n> >\n> > Not sure it's safe that each backend perform PREPARE and COMMIT\n> > PREPARED since the current design is for not leading an inconsistency\n> > between the actual transaction result and the result the user sees.\n>\n> As Fujii-san is asking, I also would like to know what situation you think is not safe. Are you worried that the FDW's commit function might call ereport(ERROR | FATAL | PANIC)?\n\nYes.\n\n> If so, can't we stipulate that the FDW implementor should ensure that the commit function always returns control to the caller?\n\nHow can the FDW implementor ensure that? Since even palloc could call\nereport(ERROR) I guess it's hard to require that to all FDW\nimplementors.\n\n>\n>\n> > But in the future, I think we can have multiple background workers per\n> > database for better performance.\n>\n> Does the database in \"per database\" mean the local database (that applications connect to), or the remote database accessed via FDW?\n\nI meant the local database. In the current patch, we launch the\nresolver process per local database. My idea is to allow launching\nmultiple resolver processes for one local database as long as the\nnumber of workers doesn't exceed the limit.\n\n>\n> I'm wondering how the FDW and background worker(s) can realize parallel prepare and parallel commit. That is, the coordinator transaction performs:\n>\n> 1. Issue prepare to all participant nodes, but doesn't wait for the reply for each issue.\n> 2. Waits for replies from all participants.\n> 3. 
Issue commit to all participant nodes, but doesn't wait for the reply for each issue.\n> 4. Waits for replies from all participants.\n>\n> If we just consider PostgreSQL and don't think about FDW, we can use libpq async functions -- PQsendQuery, PQconsumeInput, and PQgetResult. pgbench uses them so that one thread can issue SQL statements on multiple connections in parallel.\n>\n> But when we consider the FDW interface, plus other DBMSs, how can we achieve the parallelism?\n\nIt's still a rough idea but I think we can use TMASYNC flag and\nxa_complete explained in the XA specification. The core transaction\nmanager call prepare, commit, rollback APIs with the flag, requiring\nto execute the operation asynchronously and to return a handler (e.g.,\na socket taken by PQsocket in postgres_fdw case) to the transaction\nmanager. Then the transaction manager continues polling the handler\nuntil it becomes readable and testing the completion using by\nxa_complete() with no wait, until all foreign servers return OK on\nxa_complete check.\n\n>\n>\n> > > 3. postgres_fdw cannot detect remote updates when the UDF executed on a\n> > remote node updates data.\n> >\n> > I assume that you mean the pushing the UDF down to a foreign server.\n> > If so, I think we can do this by improving postgres_fdw. In the current patch,\n> > registering and unregistering a foreign server to a group of 2PC and marking a\n> > foreign server as updated is FDW responsible. So perhaps if we had a way to\n> > tell postgres_fdw that the UDF might update the data on the foreign server,\n> > postgres_fdw could mark the foreign server as updated if the UDF is shippable.\n>\n> Maybe we can consider VOLATILE functions update data. That may be overreaction, though.\n\nSorry I don't understand that. 
The volatile functions are not pushed\ndown to the foreign servers in the first place, no?\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 13 Sep 2020 17:36:09 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
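The four-step flow quoted above (issue prepare to all participants without waiting, collect all replies, issue commit to all, collect again) can be modeled compactly. This is a toy Python simulation of the coordinator loop, not the patch's C code or libpq; `FakeServer` and `parallel_2pc` are invented names, and real participants would be reached via PQsendQuery/PQgetResult or an FDW callback.

```python
import concurrent.futures

class FakeServer:
    """Toy stand-in for a participant reached through an FDW (invented)."""
    def __init__(self, name):
        self.name = name
        self.state = "active"

    def prepare(self, xid):           # models "PREPARE TRANSACTION 'xid'"
        self.state = "prepared"
        return True

    def commit_prepared(self, xid):   # models "COMMIT PREPARED 'xid'"
        assert self.state == "prepared"
        self.state = "committed"
        return True

def parallel_2pc(servers, xid):
    # Steps 1-2: fire PREPARE at every participant, then wait for all replies.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        if not all(pool.map(lambda s: s.prepare(xid), servers)):
            raise RuntimeError("a participant failed to prepare; roll back instead")
    # Steps 3-4: fire COMMIT PREPARED at every participant, then wait again.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        list(pool.map(lambda s: s.commit_prepared(xid), servers))

servers = [FakeServer(f"node{i}") for i in range(3)]
parallel_2pc(servers, "fx_1")
```

The thread pool here only simulates the "don't wait per request" property; with libpq the same effect comes from one process multiplexing sockets, and with XA from TMASYNC plus xa_complete() polling.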
{
"msg_contents": "On Fri, Sep 11, 2020 at 4:37 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> Considering the design without the resolver process, I think we can\n> easily replace the latter with the manual resolution. OTOH, it's not\n> easy for the former. I have no idea about better design for now,\n> although, as you described, if we could ensure that the process\n> doesn't raise an error during resolving foreign transactions after\n> committing the local transaction we would not need the resolver\n> process.\n\nMy initial patch used the same backend to resolve foreign\ntransactions. But in that case even though the user receives COMMIT\ncompleted, the backend isn't accepting the next query till it is busy\nresolving the foreign server. That might be a usability issue again if\nattempting to resolve all foreign transactions takes noticeable time.\nIf we go this route, we should try to resolve as many foreign\ntransactions as possible ignoring any errors while doing so and\nsomehow let user know which transactions couldn't be resolved. User\ncan then take responsibility for resolving those.\n\n>\n> Or the second idea would be that the backend commits only the local\n> transaction then returns the acknowledgment of COMMIT to the user\n> without resolving foreign transactions. Then the user manually\n> resolves the foreign transactions by, for example, using the SQL\n> function pg_resolve_foreign_xact() within a separate transaction. That\n> way, even if an error occurred during resolving foreign transactions\n> (i.g., executing COMMIT PREPARED), it’s okay as the user is already\n> aware of the local transaction having been committed and can retry to\n> resolve the unresolved foreign transaction. So we won't need the\n> resolver process while avoiding such inconsistency.\n>\n> But a drawback would be that the transaction commit doesn't ensure\n> that all foreign transactions are completed. 
The subsequent\n> transactions would need to check if the previous distributed\n> transaction is completed to see its results. I’m not sure it’s a good\n> design in terms of usability.\n\nI agree, this won't be acceptable.\n\nIn either case, I think a solution where the local server takes\nresponsibility to resolve foreign transactions will be better even in\nthe first cut.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 15 Sep 2020 18:54:44 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
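The "resolve as many foreign transactions as possible, ignoring errors, and report the rest" behavior described above can be sketched as follows. This is an illustrative Python model under assumed names (`Participant`, `resolve_best_effort`), not the proposed pg_resolve_foreign_xact() implementation.

```python
class Participant:
    """Invented stand-in for a foreign server holding a prepared transaction."""
    def __init__(self, name, reachable=True):
        self.name = name
        self.reachable = reachable
        self.resolved = False

    def commit_prepared(self, xid):
        if not self.reachable:
            raise ConnectionError(f"{self.name}: could not connect")
        self.resolved = True

def resolve_best_effort(participants, xid):
    """Try to resolve every foreign transaction; collect failures instead of
    stopping at the first error, so the user can retry only the leftovers."""
    unresolved = []
    for p in participants:
        try:
            p.commit_prepared(xid)
        except Exception as exc:
            unresolved.append((p.name, str(exc)))
    return unresolved
```

A caller would report the returned list to the user, who then takes responsibility for resolving those transactions manually.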
{
"msg_contents": "From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\r\n> > If so, can't we stipulate that the FDW implementor should ensure that the\r\n> commit function always returns control to the caller?\r\n> \r\n> How can the FDW implementor ensure that? Since even palloc could call\r\n> ereport(ERROR) I guess it's hard to require that to all FDW\r\n> implementors.\r\n\r\nI think the what FDW commit routine will do is to just call xa_commit(), or PQexec(\"COMMIT PREPARED\") in postgres_fdw.\r\n\r\n\r\n> It's still a rough idea but I think we can use TMASYNC flag and\r\n> xa_complete explained in the XA specification. The core transaction\r\n> manager call prepare, commit, rollback APIs with the flag, requiring\r\n> to execute the operation asynchronously and to return a handler (e.g.,\r\n> a socket taken by PQsocket in postgres_fdw case) to the transaction\r\n> manager. Then the transaction manager continues polling the handler\r\n> until it becomes readable and testing the completion using by\r\n> xa_complete() with no wait, until all foreign servers return OK on\r\n> xa_complete check.\r\n\r\nUnfortunately, even Oracle and Db2 don't support XA asynchronous execution for years. Our DBMS Symfoware doesn't, either. I don't expect other DBMSs support it.\r\n\r\nHmm, I'm afraid this may be one of the FDW's intractable walls for a serious scale-out DBMS. If we define asynchronous FDW routines for 2PC, postgres_fdw would be able to implement them by using libpq asynchronous functions. But other DBMSs can't ...\r\n\r\n\r\n> > Maybe we can consider VOLATILE functions update data. That may be\r\n> overreaction, though.\r\n> \r\n> Sorry I don't understand that. The volatile functions are not pushed\r\n> down to the foreign servers in the first place, no?\r\n\r\nAh, you're right. 
Then, the choices are twofold: (1) trust users in that their functions don't update data or the user's claim (specification) about it, and (2) get notification through FE/BE protocol that the remote transaction may have updated data.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Wed, 16 Sep 2020 04:20:41 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\r\n> The resolver process has two functionalities: resolving foreign\r\n> transactions automatically when the user issues COMMIT (the case you\r\n> described in the second paragraph), and resolving foreign transaction\r\n> when the corresponding backend no longer exist or when the server\r\n> crashes during in the middle of 2PC (described in the third\r\n> paragraph).\r\n> \r\n> Considering the design without the resolver process, I think we can\r\n> easily replace the latter with the manual resolution. OTOH, it's not\r\n> easy for the former. I have no idea about better design for now,\r\n> although, as you described, if we could ensure that the process\r\n> doesn't raise an error during resolving foreign transactions after\r\n> committing the local transaction we would not need the resolver\r\n> process.\r\n\r\nYeah, the resolver background process -- someone independent of client sessions -- is necessary, because the client session disappears sometime. When the server that hosts the 2PC coordinator crashes, there are no client sessions. Our DBMS Symfoware also runs background threads that take care of resolution of in-doubt transactions due to a server or network failure.\r\n\r\nThen, how does the resolver get involved in 2PC to enable parallel 2PC? 
Two ideas quickly come to mind:\r\n\r\n(1) Each client backend issues prepare and commit to multiple remote nodes asynchronously.\r\nIf the communication fails during commit, the client backend leaves the commit notification task to the resolver.\r\nThat is, the resolver lends a hand during failure recovery, and doesn't interfere with the transaction processing during normal operation.\r\n\r\n(2) The resolver takes some responsibility in 2PC processing during normal operation.\r\n(send prepare and/or commit to remote nodes and get the results.)\r\nTo avoid serial execution per transaction, the resolver bundles multiple requests, send them in bulk, and wait for multiple replies at once.\r\nThis allows the coordinator to do its own prepare processing in parallel with those of participants.\r\nHowever, in Postgres, this requires context switches between the client backend and the resolver.\r\n\r\n\r\nOur Symfoware takes (2). However, it doesn't suffer from the context switch, because the server is multi-threaded and further implements or uses more lightweight entities than the thread.\r\n\r\n\r\n> Or the second idea would be that the backend commits only the local\r\n> transaction then returns the acknowledgment of COMMIT to the user\r\n> without resolving foreign transactions. Then the user manually\r\n> resolves the foreign transactions by, for example, using the SQL\r\n> function pg_resolve_foreign_xact() within a separate transaction. That\r\n> way, even if an error occurred during resolving foreign transactions\r\n> (i.g., executing COMMIT PREPARED), it’s okay as the user is already\r\n> aware of the local transaction having been committed and can retry to\r\n> resolve the unresolved foreign transaction. So we won't need the\r\n> resolver process while avoiding such inconsistency.\r\n> \r\n> But a drawback would be that the transaction commit doesn't ensure\r\n> that all foreign transactions are completed. 
The subsequent\r\n> transactions would need to check if the previous distributed\r\n> transaction is completed to see its results. I’m not sure it’s a good\r\n> design in terms of usability.\r\n\r\nI don't think it's a good design as you are worried. I guess that's why Postgres-XL had to create a tool called pgxc_clean and ask the user to resolve transactions with it.\r\n\r\npgxc_clean\r\nhttps://www.postgres-xl.org/documentation/pgxcclean.html\r\n\r\n\"pgxc_clean is a Postgres-XL utility to maintain transaction status after a crash. When a Postgres-XL node crashes and recovers or fails over, the commit status of the node may be inconsistent with other nodes. pgxc_clean checks transaction commit status and corrects them.\"\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Wed, 16 Sep 2020 05:52:49 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
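Idea (2) above, where the resolver bundles requests from several backends and handles them in bulk so each backend waits only once, can be sketched like this. It is a toy Python model (queue plus events standing in for shared memory and latches); the names `resolver`, `requests`, and `done` are invented.

```python
import queue
import threading

def resolver(requests, done, stop):
    """Drain whatever commit requests have queued up, handle them as one
    bundle, and wake every waiting backend (idea (2), simplified)."""
    while not stop.is_set():
        batch = []
        try:
            batch.append(requests.get(timeout=0.05))
        except queue.Empty:
            continue
        while True:                      # grab everything else already queued
            try:
                batch.append(requests.get_nowait())
            except queue.Empty:
                break
        # One bulk round trip for the whole bundle instead of one per transaction.
        for xid, event in batch:
            done[xid] = "committed"
            event.set()

requests, done, stop = queue.Queue(), {}, threading.Event()
worker = threading.Thread(target=resolver, args=(requests, done, stop))
worker.start()
events = []
for i in range(4):                       # four backends hand off their commits
    ev = threading.Event()
    events.append(ev)
    requests.put((f"fx_{i}", ev))
for ev in events:
    ev.wait(timeout=2)                   # each backend waits for its own reply
stop.set()
worker.join()
```

The context-switch cost mentioned above shows up here as the handoff between the backend threads and the resolver thread; batching amortizes it across transactions.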
{
"msg_contents": "On Fri, Aug 21, 2020 at 03:25:29PM +0900, Masahiko Sawada wrote:\n> Thank you for letting me know. I've attached the latest version patch set.\n\nA rebase is needed again as the CF bot is complaining.\n--\nMichael",
"msg_date": "Thu, 17 Sep 2020 14:25:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Thu, 17 Sep 2020 at 14:25, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Aug 21, 2020 at 03:25:29PM +0900, Masahiko Sawada wrote:\n> > Thank you for letting me know. I've attached the latest version patch set.\n>\n> A rebase is needed again as the CF bot is complaining.\n\nThank you for letting me know. I'm updating the patch and splitting\ninto small pieces as Fujii-san suggested. I'll submit the latest patch\nset early next week.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 18 Sep 2020 17:00:00 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Wed, 16 Sep 2020 at 13:20, tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> > > If so, can't we stipulate that the FDW implementor should ensure that the\n> > commit function always returns control to the caller?\n> >\n> > How can the FDW implementor ensure that? Since even palloc could call\n> > ereport(ERROR) I guess it's hard to require that to all FDW\n> > implementors.\n>\n> I think the what FDW commit routine will do is to just call xa_commit(), or PQexec(\"COMMIT PREPARED\") in postgres_fdw.\n\nYes, but it still seems hard to me that we require for all FDW\nimplementations to commit/rollback prepared transactions without the\npossibility of ERROR.\n\n>\n>\n> > It's still a rough idea but I think we can use TMASYNC flag and\n> > xa_complete explained in the XA specification. The core transaction\n> > manager call prepare, commit, rollback APIs with the flag, requiring\n> > to execute the operation asynchronously and to return a handler (e.g.,\n> > a socket taken by PQsocket in postgres_fdw case) to the transaction\n> > manager. Then the transaction manager continues polling the handler\n> > until it becomes readable and testing the completion using by\n> > xa_complete() with no wait, until all foreign servers return OK on\n> > xa_complete check.\n>\n> Unfortunately, even Oracle and Db2 don't support XA asynchronous execution for years. Our DBMS Symfoware doesn't, either. I don't expect other DBMSs support it.\n>\n> Hmm, I'm afraid this may be one of the FDW's intractable walls for a serious scale-out DBMS. If we define asynchronous FDW routines for 2PC, postgres_fdw would be able to implement them by using libpq asynchronous functions. But other DBMSs can't ...\n\nI think it's not necessarily that all FDW implementations need to be\nable to support xa_complete(). 
We can support both synchronous and\nasynchronous executions of prepare/commit/rollback.\n\n>\n>\n> > > Maybe we can consider VOLATILE functions update data. That may be\n> > overreaction, though.\n> >\n> > Sorry I don't understand that. The volatile functions are not pushed\n> > down to the foreign servers in the first place, no?\n>\n> Ah, you're right. Then, the choices are twofold: (1) trust users in that their functions don't update data or the user's claim (specification) about it, and (2) get notification through FE/BE protocol that the remote transaction may have updated data.\n>\n\nI'm confused about the point you're concerned about the UDF function.\nIf you're concerned that executing a UDF function by like 'SELECT\nmyfunc();' updates data on a foreign server, since the UDF should know\nwhich foreign server it modifies data on it should be able to register\nthe foreign server and mark as modified. Or you’re concerned that a\nUDF function in WHERE condition is pushed down and updates data (e.g.,\n ‘SELECT … FROM foreign_tbl WHERE id = myfunc()’)?\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 18 Sep 2020 22:56:12 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\r\n> Yes, but it still seems hard to me that we require for all FDW\r\n> implementations to commit/rollback prepared transactions without the\r\n> possibility of ERROR.\r\n\r\nOf course we can't eliminate the possibility of error, because remote servers require network communication. What I'm saying is to just require the FDW to return error like xa_commit(), not throwing control away with ereport(ERROR). I don't think it's too strict.\r\n\r\n\r\n> I think it's not necessarily that all FDW implementations need to be\r\n> able to support xa_complete(). We can support both synchronous and\r\n> asynchronous executions of prepare/commit/rollback.\r\n\r\nYes, I think parallel prepare and commit can be an option for FDW. But I don't think it's an option for a serious scale-out DBMS. If we want to use FDW as part of PostgreSQL's scale-out infrastructure, we should design (if not implemented in the first version) how the parallelism can be realized. That design is also necessary because it could affect the FDW API.\r\n\r\n\r\n> If you're concerned that executing a UDF function by like 'SELECT\r\n> myfunc();' updates data on a foreign server, since the UDF should know\r\n> which foreign server it modifies data on it should be able to register\r\n> the foreign server and mark as modified. Or you’re concerned that a\r\n> UDF function in WHERE condition is pushed down and updates data (e.g.,\r\n> ‘SELECT … FROM foreign_tbl WHERE id = myfunc()’)?\r\n\r\nWhat I had in mind is \"SELECT myfunc(...) FROM mytable WHERE col = ...;\" Does the UDF call get pushed down to the foreign server in this case? If not now, could it be pushed down in the future? If it could be, it's worth considering how to detect the remote update now.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Tue, 22 Sep 2020 01:17:38 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Tue, Sep 22, 2020 at 6:48 AM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n>\n> > I think it's not necessarily that all FDW implementations need to be\n> > able to support xa_complete(). We can support both synchronous and\n> > asynchronous executions of prepare/commit/rollback.\n>\n> Yes, I think parallel prepare and commit can be an option for FDW. But I don't think it's an option for a serious scale-out DBMS. If we want to use FDW as part of PostgreSQL's scale-out infrastructure, we should design (if not implemented in the first version) how the parallelism can be realized. That design is also necessary because it could affect the FDW API.\n\nparallelism here has both pros and cons. If one of the servers errors\nout while preparing for a transaction, there is no point in preparing\nthe transaction on other servers. In parallel execution we will\nprepare on multiple servers before realising that one of them has\nfailed to do so. On the other hand preparing on multiple servers in\nparallel provides a speed up.\n\nBut this can be an improvement on version 1. The current approach\ndoesn't render such an improvement impossible. So if that's something\nhard to do, we should do that in the next version rather than\ncomplicating this patch.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 22 Sep 2020 18:25:57 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
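The trade-off described above, that parallel prepare is faster but may prepare on some servers before learning another failed, can be sketched as: prepare everywhere in parallel, and if any participant fails, roll back the ones that did prepare. A toy Python model with invented names (`Node`, `prepare_all`), not the patch's code:

```python
import concurrent.futures

class Node:
    """Invented model of a participant whose prepare() may fail."""
    def __init__(self, name, prepare_ok=True):
        self.name = name
        self.prepare_ok = prepare_ok
        self.state = "active"

    def prepare(self, xid):
        if not self.prepare_ok:
            raise RuntimeError(f"{self.name}: prepare failed")
        self.state = "prepared"

    def rollback_prepared(self, xid):
        self.state = "aborted"

def prepare_all(nodes, xid):
    """Prepare on all nodes in parallel; on any failure, roll back the
    nodes that did prepare and report failure to the caller."""
    prepared, ok = [], True
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {pool.submit(n.prepare, xid): n for n in nodes}
        for fut, node in futures.items():
            try:
                fut.result()
                prepared.append(node)
            except RuntimeError:
                ok = False
    if not ok:
        for node in prepared:
            node.rollback_prepared(xid)
    return ok
```

The extra prepares on the failure path are the "con"; in the common all-success case the prepares overlap, which is the "pro".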
{
"msg_contents": "From: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\r\n> parallelism here has both pros and cons. If one of the servers errors\r\n> out while preparing for a transaction, there is no point in preparing\r\n> the transaction on other servers. In parallel execution we will\r\n> prepare on multiple servers before realising that one of them has\r\n> failed to do so. On the other hand preparing on multiple servers in\r\n> parallel provides a speed up.\r\n\r\nAnd pros are dominant in practice. If many transactions are erroring out (during prepare), the system is not functioning for the user. Such an application should be corrected before they are put into production.\r\n\r\n\r\n> But this can be an improvement on version 1. The current approach\r\n> doesn't render such an improvement impossible. So if that's something\r\n> hard to do, we should do that in the next version rather than\r\n> complicating this patch.\r\n\r\nCould you share your idea on how the current approach could enable parallelism? This is an important point, because (1) the FDW may not lead us to a seriously competitive scale-out DBMS, and (2) a better FDW API and/or implementation could be considered for non-parallel interaction if we have the realization of parallelism in mind. I think that kind of consideration is the design (for the future).\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n",
"msg_date": "Tue, 22 Sep 2020 20:42:50 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Wed, Sep 23, 2020 at 2:13 AM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\n> > parallelism here has both pros and cons. If one of the servers errors\n> > out while preparing for a transaction, there is no point in preparing\n> > the transaction on other servers. In parallel execution we will\n> > prepare on multiple servers before realising that one of them has\n> > failed to do so. On the other hand preparing on multiple servers in\n> > parallel provides a speed up.\n>\n> And pros are dominant in practice. If many transactions are erroring out (during prepare), the system is not functioning for the user. Such an application should be corrected before they are put into production.\n>\n>\n> > But this can be an improvement on version 1. The current approach\n> > doesn't render such an improvement impossible. So if that's something\n> > hard to do, we should do that in the next version rather than\n> > complicating this patch.\n>\n> Could you share your idea on how the current approach could enable parallelism? This is an important point, because (1) the FDW may not lead us to a seriously competitive scale-out DBMS, and (2) a better FDW API and/or implementation could be considered for non-parallel interaction if we have the realization of parallelism in mind. I think that kind of consideration is the design (for the future).\n>\n\nThe way I am looking at is to put the parallelism in the resolution\nworker and not in the FDW. If we use multiple resolution workers, they\ncan fire commit/abort on multiple foreign servers at a time.\n\nBut if we want parallelism within a single resolution worker, we will\nneed a separate FDW APIs for firing asynchronous commit/abort prepared\ntxn and fetching their results resp. 
But given the variety of FDWs,\nnot all of them will support asynchronous API, so we have to support\nsynchronous API anyway, which is what can be targeted in the first\nversion.\n\nThinking more about it, the core may support an API which accepts a\nlist of prepared transactions, their foreign servers and user mappings\nand lets the FDW resolve all those either in parallel or one by one. So\nparallelism is the responsibility of the FDW and not the core. But then we\nlose parallelism across FDWs, which may not be a common case.\n\nGiven the complications around this, I think we should go ahead\nsupporting synchronous API first and in a second version introduce\noptional asynchronous API.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 23 Sep 2020 19:06:01 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Tue, 22 Sep 2020 at 10:17, tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> > Yes, but it still seems hard to me that we require for all FDW\n> > implementations to commit/rollback prepared transactions without the\n> > possibility of ERROR.\n>\n> Of course we can't eliminate the possibility of error, because remote servers require network communication. What I'm saying is to just require the FDW to return error like xa_commit(), not throwing control away with ereport(ERROR). I don't think it's too strict.\n\nSo with your idea, I think we require FDW developers to not call\nereport(ERROR) as much as possible. If they need to use a function\nincluding palloc, lappend etc that could call ereport(ERROR), they\nneed to use PG_TRY() and PG_CATCH() and return the control along with\nthe error message to the transaction manager rather than raising an\nerror. Then the transaction manager will emit the error message at an\nerror level lower than ERROR (e.g., WARNING), and call commit/rollback\nAPI again. But normally we do some cleanup on error but in this case\nthe retrying commit/rollback is performed without any cleanup. Is that\nright? I’m not sure it’s safe though.\n\n>\n>\n> > I think it's not necessarily that all FDW implementations need to be\n> > able to support xa_complete(). We can support both synchronous and\n> > asynchronous executions of prepare/commit/rollback.\n>\n> Yes, I think parallel prepare and commit can be an option for FDW. But I don't think it's an option for a serious scale-out DBMS. If we want to use FDW as part of PostgreSQL's scale-out infrastructure, we should design (if not implemented in the first version) how the parallelism can be realized. 
That design is also necessary because it could affect the FDW API.\n>\n>\n> > If you're concerned that executing a UDF function by like 'SELECT\n> > myfunc();' updates data on a foreign server, since the UDF should know\n> > which foreign server it modifies data on it should be able to register\n> > the foreign server and mark as modified. Or you’re concerned that a\n> > UDF function in WHERE condition is pushed down and updates data (e.g.,\n> > ‘SELECT … FROM foreign_tbl WHERE id = myfunc()’)?\n>\n> What I had in mind is \"SELECT myfunc(...) FROM mytable WHERE col = ...;\" Does the UDF call get pushed down to the foreign server in this case? If not now, could it be pushed down in the future? If it could be, it's worth considering how to detect the remote update now.\n\nIIUC aggregation functions can be pushed down to the foreign server\nbut I have no idea whether a normal UDF in the select list is pushed down.\nI wonder if it isn't.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 24 Sep 2020 06:51:46 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
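The contract discussed above, where the FDW commit routine traps errors and returns control so the transaction manager can emit a WARNING and retry, can be modeled as below. In real FDW code this would be C with PG_TRY()/PG_CATCH(); this Python sketch with invented names (`commit_prepared_no_throw`, `resolver_loop`, `FlakyServer`) only illustrates the control flow.

```python
def commit_prepared_no_throw(server, xid):
    """Analogue of an FDW commit routine that hands errors back as a
    return value instead of propagating them (PG_TRY/PG_CATCH in C)."""
    try:
        server.commit_prepared(xid)
        return None
    except Exception as exc:
        return str(exc)

def resolver_loop(server, xid, max_attempts=5):
    """The transaction manager downgrades failures to warnings and retries."""
    for _ in range(max_attempts):
        err = commit_prepared_no_throw(server, xid)
        if err is None:
            return True
        print(f"WARNING: resolving {xid} failed ({err}); will retry")
    return False

class FlakyServer:
    """Invented server that fails a fixed number of times, then succeeds."""
    def __init__(self, failures):
        self.failures = failures
        self.committed = False

    def commit_prepared(self, xid):
        if self.failures > 0:
            self.failures -= 1
            raise ConnectionError("timeout talking to foreign server")
        self.committed = True
```

Note the safety question raised above is visible here too: the retry happens without any intervening cleanup, which is only sound if the commit routine has no partial state to clean up.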
{
"msg_contents": "From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\r\n> So with your idea, I think we require FDW developers to not call\r\n> ereport(ERROR) as much as possible. If they need to use a function\r\n> including palloc, lappend etc that could call ereport(ERROR), they\r\n> need to use PG_TRY() and PG_CATCH() and return the control along with\r\n> the error message to the transaction manager rather than raising an\r\n> error. Then the transaction manager will emit the error message at an\r\n> error level lower than ERROR (e.g., WARNING), and call commit/rollback\r\n> API again. But normally we do some cleanup on error but in this case\r\n> the retrying commit/rollback is performed without any cleanup. Is that\r\n> right? I’m not sure it’s safe though.\r\n\r\n\r\nYes. It's legitimate to require the FDW commit routine to return control, because the prepare of 2PC is a promise to commit successfully. The second-phase commit should avoid doing that could fail. For example, if some memory is needed for commit, it should be allocated in prepare or before.\r\n\r\n\r\n> IIUC aggregation functions can be pushed down to the foreign server\r\n> but I have not idea the normal UDF in the select list is pushed down.\r\n> I wonder if it isn't.\r\n\r\nOh, that's the current situation. Understood. I thought the UDF call is also pushed down, as I saw Greenplum does so. (Reading the manual, Greenplum disallows data updates in the UDF when it's executed on the remote segment server.)\r\n\r\n(Aren't we overlooking something else that updates data on the remote server while the local server is unaware?)\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Thu, 24 Sep 2020 08:23:25 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, 18 Sep 2020 at 17:00, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 17 Sep 2020 at 14:25, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Fri, Aug 21, 2020 at 03:25:29PM +0900, Masahiko Sawada wrote:\n> > > Thank you for letting me know. I've attached the latest version patch set.\n> >\n> > A rebase is needed again as the CF bot is complaining.\n>\n> Thank you for letting me know. I'm updating the patch and splitting\n> into small pieces as Fujii-san suggested. I'll submit the latest patch\n> set early next week.\n>\n\nI've rebased the patch set and split into small pieces. Here are short\ndescriptions of each change:\n\nv26-0001-Recreate-RemoveForeignServerById.patch\n\nThis commit recreates RemoveForeignServerById that was removed by\nb1d32d3e3. This is necessary because we need to check if there is a\nforeign transaction involved with the foreign server that is about to\nbe removed.\n\nv26-0002-Introduce-transaction-manager-for-foreign-transa.patch\n\nThis commit adds the basic foreign transaction manager,\nCommitForeignTransaction, and RollbackForeignTransaction API. These\nAPIs support only one-phase. With this change, FDW is able to control\nits transaction using the foreign transaction manager, not using\nXactCallback.\n\nv26-0003-postgres_fdw-supports-commit-and-rollback-APIs.patch\n\nThis commit implements both CommitForeignTransaction and\nRollbackForeignTransaction APIs in postgres_fdw. Note that since\nPREPARE TRANSACTION is still not supported there is nothing the user\nnewly is able to do.\n\nv26-0004-Add-PrepareForeignTransaction-API.patch\n\nThis commit adds prepared foreign transaction support including WAL\nlogging and recovery, and PrepareForeignTransaction API. With this\nchange, the user is able to do 'PREPARE TRANSACTION' and\n'COMMIT/ROLLBACK PREPARED' commands on the transaction that involves\nforeign servers. But note that COMMIT/ROLLBACK PREPARED ends only the\nlocal transaction. 
It doesn't do anything for foreign transactions.\nTherefore, the user needs to resolve foreign transactions manually by\nexecuting the pg_resolve_foreign_xacts() SQL function which is also\nintroduced by this commit.\n\nv26-0005-postgres_fdw-supports-prepare-API-and-support-co.patch\n\nThis commit implements PrepareForeignTransaction API and makes\nCommitForeignTransaction and RollbackForeignTransaction support\ntwo-phase commit.\n\nv26-0006-Add-GetPrepareID-API.patch\n\nThis commit adds GetPrepareID API.\n\nv26-0007-Automatic-foreign-transaciton-resolution-on-COMM.patch\n\nThis commit adds the automatic foreign transaction resolution on\nCOMMIT/ROLLBACK PREPARED by using foreign transaction resolver and\nlauncher processes. With this change, the user is able to\ncommit/rollback the distributed transaction by COMMIT/ROLLBACK\nPREPARED without manual resolution. The involved foreign transactions\nare automatically resolved by a resolver process.\n\nv26-0008-Automatic-foreign-transaciton-resolution-on-comm.patch\n\nThis commit adds the automatic foreign transaction resolution on\ncommit/rollback. With this change, the user is able to commit the\nforeign transactions automatically on commit without executing PREPARE\nTRANSACTION when foreign_twophase_commit is 'required'. IOW, we can\nguarantee that all foreign transactions had been resolved when the\nuser got an acknowledgment of COMMIT.\n\nv26-0009-postgres_fdw-supports-automatically-resolution.patch\n\nThis commit makes postgres_fdw support the 0008 change.\n\nv26-0010-Documentation-update.patch\nv26-0011-Add-regression-tests-for-foreign-twophase-commit.patch\n\nThe above commits are documentation updates and regression tests.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 24 Sep 2020 20:53:39 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
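The flow the patch series describes — prepare every involved foreign transaction, commit the local transaction, then resolve each prepared foreign transaction, with pg_resolve_foreign_xacts() or a resolver process handling anything left over — can be sketched as a toy simulation. This is an illustrative Python model only; the class and function names are invented for this sketch and do not correspond to the patch's actual C code.

```python
# Toy model of the prepare-then-resolve sequence described in the patch
# series. All names (ForeignServer, commit_distributed, ...) are invented
# stand-ins; the real implementation is C inside the PostgreSQL server.

class ForeignServer:
    def __init__(self, name):
        self.name = name
        self.prepared = set()
        self.committed = set()

    def prepare(self, xid):
        # PREPARE TRANSACTION on this foreign server.
        self.prepared.add(xid)

    def commit_prepared(self, xid):
        # COMMIT PREPARED: resolve a previously prepared transaction.
        if xid not in self.prepared:
            raise RuntimeError("no such prepared transaction")
        self.prepared.remove(xid)
        self.committed.add(xid)

def commit_distributed(xid, local_log, foreign_servers):
    # Phase 1: prepare on every involved foreign server.
    for fs in foreign_servers:
        fs.prepare(xid)
    # Commit the local transaction; past this point we may only roll forward.
    local_log.append(xid)
    # Phase 2: resolve the prepared foreign transactions. Failures are
    # collected for later retry (the resolver / manual-resolution path).
    unresolved = []
    for fs in foreign_servers:
        try:
            fs.commit_prepared(xid)
        except RuntimeError:
            unresolved.append((fs.name, xid))
    return unresolved
```

The key property the sketch shows is that once all prepares succeed and the local transaction commits, the remaining foreign transactions can only be rolled forward, never aborted.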
{
"msg_contents": "From: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\r\n> The way I am looking at is to put the parallelism in the resolution\r\n> worker and not in the FDW. If we use multiple resolution workers, they\r\n> can fire commit/abort on multiple foreign servers at a time.\r\n\r\nFrom a single session's view, yes. However, the requests from multiple sessions are processed one at a time within each resolver, because the resolver has to call the synchronous FDW prepare/commit routines and wait for the response from the remote server. That's too limiting.\r\n\r\n\r\n> But if we want parallelism within a single resolution worker, we will\r\n> need a separate FDW APIs for firing asynchronous commit/abort prepared\r\n> txn and fetching their results resp. But given the variety of FDWs,\r\n> not all of them will support asynchronous API, so we have to support\r\n> synchronous API anyway, which is what can be targeted in the first\r\n> version.\r\n\r\nI agree in that most FDWs will be unlikely to have asynchronous prepare/commit functions, as demonstrated by the fact that even Oracle and Db2 don't implement XA asynchronous APIs. That's one problem of using FDW for Postgres scale-out. When we enhance FDW, we have to take care of other DBMSs to make the FDW interface practical. OTOH, we want to make maximum use of Postgres features, such as libpq asynchronous API, to make Postgres scale-out as performant as possible. But the scale-out design is bound by the FDW interface. I don't feel accepting such less performant design is an attitude of this community, as people here are strict against even 1 or 2 percent performance drop.\r\n\r\n\r\n> Thinking more about it, the core may support an API which accepts a\r\n> list of prepared transactions, their foreign servers and user mappings\r\n> and let FDW resolve all those either in parallel or one by one. So\r\n> parallelism is responsibility of FDW and not the core. 
But then we\r\n> loose parallelism across FDWs, which may not be a common case.\r\n\r\nHmm, I understand asynchronous FDW relation scan is being developed now, in the form of cooperation between the FDW and the executor. If we make just the FDW responsible for prepare/commit parallelism, the design becomes asymmetric. As you say, I'm not sure if the parallelism is wanted among different types, say, Postgres and Oracle. In fact, major DBMSs don't implement XA asynchronous API. But such lack of parallelism may be one cause of the bad reputation that 2PC (of XA) is slow.\r\n\r\n\r\n> Given the complications around this, I think we should go ahead\r\n> supporting synchronous API first and in second version introduce\r\n> optional asynchronous API.\r\n\r\nHow about the following?\r\n\r\n* Add synchronous and asynchronous versions of prepare/commit/abort routines and a routine to wait for completion of asynchronous execution in FdwRoutine. They are optional.\r\npostgres_fdw can implement the asynchronous routines using libpq asynchronous functions. 
Other DBMSs can implement XA asynchronous API for them in theory.\r\n\r\n* The client backend uses asynchronous FDW routines if available:\r\n\r\n/* Issue asynchronous prepare | commit | rollback to FDWs that support it */\r\nforeach (per each foreign server used in the transaction)\r\n{\r\n\tif (fdwroutine->{prepare | commit | rollback}_async_func)\r\n\t\tfdwroutine->{prepare | commit | rollback}_async_func(...);\r\n}\r\n\r\n/* Wait for completion of asynchronous prepare | commit | rollback */\r\nforeach (per each foreign server used in the transaction)\r\n{\r\n\tif (fdwroutine->{prepare | commit | rollback}_async_func)\r\n\t\tret = fdwroutine->wait_for_completion(...);\r\n}\r\n\r\n/* Issue synchronous prepare | commit | rollback to FDWs that don't support it */\r\nforeach (per each foreign server used in the transaction)\r\n{\r\n\tif (fdwroutine->{prepare | commit | rollback}_async_func == NULL)\r\n\t\tret = fdwroutine->{prepare | commit | rollback}_func(...);\r\n}\r\n\r\n* The client backend asks the resolver to commit or rollback the remote transaction only when the remote transaction fails (due to the failure of remote server or network.) That is, the resolver is not involved during normal operation.\r\n\r\n\r\nThis will not be complex, and can be included in the first version, if we really want to use FDW for Postgres scale-out.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Thu, 24 Sep 2020 22:33:29 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
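The two-pass dispatch proposed in the mail above — issue asynchronous prepare/commit to every FDW that supports it, wait for all completions, then fall back to synchronous calls for the rest — can be simulated in a few lines. This is a hedged sketch: `prepare_async`, `wait_for_completion`, and `prepare_sync` stand in for the *optional* FdwRoutine callbacks being discussed, which do not exist in this form in any released API.

```python
# Simulation of the proposed two-pass loop: async-capable servers overlap
# their network round trips; sync-only servers are handled one at a time.
# The Fdw class and its method names are hypothetical stand-ins.

class Fdw:
    def __init__(self, name, has_async):
        self.name = name
        self.has_async = has_async
        self.log = []                  # records the order of operations

    def prepare_async(self):
        # Start the request without waiting for the remote response.
        self.log.append("async-start")

    def wait_for_completion(self):
        # Collect the result of a previously started async request.
        self.log.append("async-done")
        return True

    def prepare_sync(self):
        # Blocking prepare for FDWs without async support.
        self.log.append("sync")
        return True

def prepare_all(fdws):
    # Pass 1: kick off asynchronous prepares so their latencies overlap.
    for fdw in fdws:
        if fdw.has_async:
            fdw.prepare_async()
    # Pass 2: wait for every asynchronous prepare to complete.
    ok = all(f.wait_for_completion() for f in fdws if f.has_async)
    # Pass 3: prepare the sync-only servers one by one.
    for fdw in fdws:
        if not fdw.has_async:
            ok = fdw.prepare_sync() and ok
    return ok
```

With N async-capable servers this costs roughly one network round trip instead of N, which is the performance argument being made.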
{
"msg_contents": "On Thu, 24 Sep 2020 at 17:23, tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> > So with your idea, I think we require FDW developers to not call\n> > ereport(ERROR) as much as possible. If they need to use a function\n> > including palloc, lappend etc that could call ereport(ERROR), they\n> > need to use PG_TRY() and PG_CATCH() and return the control along with\n> > the error message to the transaction manager rather than raising an\n> > error. Then the transaction manager will emit the error message at an\n> > error level lower than ERROR (e.g., WARNING), and call commit/rollback\n> > API again. But normally we do some cleanup on error but in this case\n> > the retrying commit/rollback is performed without any cleanup. Is that\n> > right? I’m not sure it’s safe though.\n>\n>\n> Yes. It's legitimate to require the FDW commit routine to return control, because the prepare of 2PC is a promise to commit successfully. The second-phase commit should avoid doing that could fail. For example, if some memory is needed for commit, it should be allocated in prepare or before.\n>\n\nI don't think it's always possible to avoid raising errors in advance.\nConsidering how postgres_fdw can implement your idea, I think\npostgres_fdw would need PG_TRY() and PG_CATCH() for its connection\nmanagement. It has a connection cache in the local memory using HTAB.\nIt needs to create an entry for the first time to connect (e.g., when\nprepare and commit prepared a transaction are performed by different\nprocesses) and it needs to re-connect the foreign server when the\nentry is invalidated. In both cases, ERROR could happen. I guess the\nsame is true for other FDW implementations. Possibly other FDWs might\nneed more work for example cleanup or releasing resources. 
I think\nthat the pros of your idea are that it makes the transaction manager\nsimple since we don't need resolvers and a launcher, but the cons are\nthat it brings the complexity into FDW implementation code instead.\nAlso, IMHO I don't think it's safe for an FDW to neither re-throw an\nerror nor abort the transaction when an error occurs.\n\nIn terms of the performance you're concerned about, I wonder if we can\nsomewhat eliminate the bottleneck if multiple resolvers are able to run\non one database in the future. For example, if we could launch as many\nresolver processes as connections on the database, individual backend\nprocesses could each have one resolver process. Since there would be\ncontention and inter-process communication it still brings some\noverhead, but it might be negligible compared to a network round trip.\n\nPerhaps we can hear more opinions on that from other hackers to decide\nthe FDW transaction API design.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 25 Sep 2020 17:35:05 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
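The failure mode discussed above — COMMIT PREPARED running in a backend that has no cached connection yet, where creating the cache entry (or re-connecting after invalidation) can itself raise an error — can be illustrated with a small sketch. Real postgres_fdw keeps its connection cache in an HTAB in C; everything below is an invented Python stand-in for the pattern, not the actual API.

```python
# Hypothetical sketch: the get-or-create step of a connection cache is
# itself fallible, which is why a backend resolving someone else's
# prepared transaction can hit an error before it even reaches the server.

class ConnCache:
    def __init__(self, connect_fn):
        self.entries = {}             # server name -> connection object
        self.connect_fn = connect_fn

    def get(self, server):
        # Creating the entry is the step that can fail.
        if server not in self.entries:
            self.entries[server] = self.connect_fn(server)
        return self.entries[server]

def commit_prepared(cache, server, xid):
    """Try to commit a prepared foreign transaction without raising."""
    try:
        conn = cache.get(server)
    except ConnectionError as e:
        # Report the failure instead of propagating, so the caller can
        # hand the transaction to a resolver for a later retry.
        return ("retry-later", str(e))
    return ("committed", f"{conn}:{xid}")
```

The point of contention in the thread is whether returning a status like this, rather than letting the error propagate, is safe for arbitrary FDW implementations.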
{
"msg_contents": "From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\r\n> I don't think it's always possible to avoid raising errors in advance.\r\n> Considering how postgres_fdw can implement your idea, I think\r\n> postgres_fdw would need PG_TRY() and PG_CATCH() for its connection\r\n> management. It has a connection cache in the local memory using HTAB.\r\n> It needs to create an entry for the first time to connect (e.g., when\r\n> prepare and commit prepared a transaction are performed by different\r\n> processes) and it needs to re-connect the foreign server when the\r\n> entry is invalidated. In both cases, ERROR could happen. I guess the\r\n> same is true for other FDW implementations. Possibly other FDWs might\r\n> need more work for example cleanup or releasing resources. I think\r\n\r\nWhy does the client backend have to create a new connection cache entry during PREPARE or COMMIT PREPARE? Doesn't the client backend naturally continue to use connections that it has used in its current transaction?\r\n\r\n\r\n> that the pros of your idea are to make the transaction manager simple\r\n> since we don't need resolvers and launcher but the cons are to bring\r\n> the complexity to FDW implementation codes instead. Also, IMHO I don't\r\n> think it's safe way that FDW does neither re-throwing an error nor\r\n> abort transaction when an error occurs.\r\n\r\nNo, I didn't say the resolver is unnecessary. The resolver takes care of terminating remote transactions when the client backend encountered an error during COMMIT/ROLLBACK PREPARED.\r\n\r\n\r\n> In terms of performance you're concerned, I wonder if we can somewhat\r\n> eliminate the bottleneck if multiple resolvers are able to run on one\r\n> database in the future. For example, if we could launch resolver\r\n> processes as many as connections on the database, individual backend\r\n> processes could have one resolver process. 
Since there would be\r\n> contention and inter-process communication it still brings some\r\n> overhead but it might be negligible comparing to network round trip.\r\n\r\nDo you mean that if concurrent 200 clients each update data on two foreign servers, there are 400 resolvers? ...That's overuse of resources.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\t\r\n",
"msg_date": "Fri, 25 Sep 2020 09:21:39 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, 25 Sep 2020 at 18:21, tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> > I don't think it's always possible to avoid raising errors in advance.\n> > Considering how postgres_fdw can implement your idea, I think\n> > postgres_fdw would need PG_TRY() and PG_CATCH() for its connection\n> > management. It has a connection cache in the local memory using HTAB.\n> > It needs to create an entry for the first time to connect (e.g., when\n> > prepare and commit prepared a transaction are performed by different\n> > processes) and it needs to re-connect the foreign server when the\n> > entry is invalidated. In both cases, ERROR could happen. I guess the\n> > same is true for other FDW implementations. Possibly other FDWs might\n> > need more work for example cleanup or releasing resources. I think\n>\n> Why does the client backend have to create a new connection cache entry during PREPARE or COMMIT PREPARE? Doesn't the client backend naturally continue to use connections that it has used in its current transaction?\n\nI think there are two cases: a process executes PREPARE TRANSACTION\nand another process executes COMMIT PREPARED later, and if the\ncoordinator has cascaded foreign servers (i.g., a foreign server has\nits foreign server) and temporary connection problem happens in the\nintermediate node after PREPARE then another process on the\nintermediate node will execute COMMIT PREPARED on its foreign server.\n\n>\n>\n> > that the pros of your idea are to make the transaction manager simple\n> > since we don't need resolvers and launcher but the cons are to bring\n> > the complexity to FDW implementation codes instead. Also, IMHO I don't\n> > think it's safe way that FDW does neither re-throwing an error nor\n> > abort transaction when an error occurs.\n>\n> No, I didn't say the resolver is unnecessary. 
The resolver takes care of terminating remote transactions when the client backend encountered an error during COMMIT/ROLLBACK PREPARED.\n\nUnderstood. With your idea, we can remove at least the code that makes\nthe backend wait and the inter-process communication between backends\nand resolvers.\n\nI think we need to consider whether it's really safe and what is needed\nto achieve your idea safely.\n\n>\n>\n> > In terms of performance you're concerned, I wonder if we can somewhat\n> > eliminate the bottleneck if multiple resolvers are able to run on one\n> > database in the future. For example, if we could launch resolver\n> > processes as many as connections on the database, individual backend\n> > processes could have one resolver process. Since there would be\n> > contention and inter-process communication it still brings some\n> > overhead but it might be negligible comparing to network round trip.\n>\n> Do you mean that if concurrent 200 clients each update data on two foreign servers, there are 400 resolvers? ...That's overuse of resources.\n\nI think we have 200 resolvers in this case, since there is one resolver\nprocess per backend process. Or another idea is that all processes queue\nforeign transactions to resolve into a shared memory queue and\nresolver processes fetch and resolve them, instead of assigning one\ndistributed transaction to one resolver process. Using asynchronous\nexecution, a resolver process can process a bunch of foreign\ntransactions across distributed transactions, grouped by the\nforeign server, at once. It might be more complex than the current\napproach, but having multiple resolver processes on one database would\nincrease throughput well, especially in combination with asynchronous\nexecution.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 25 Sep 2020 23:11:27 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\r\n> On Fri, 25 Sep 2020 at 18:21, tsunakawa.takay@fujitsu.com\r\n> <tsunakawa.takay@fujitsu.com> wrote:\r\n> > Why does the client backend have to create a new connection cache entry\r\n> during PREPARE or COMMIT PREPARE? Doesn't the client backend naturally\r\n> continue to use connections that it has used in its current transaction?\r\n> \r\n> I think there are two cases: a process executes PREPARE TRANSACTION\r\n> and another process executes COMMIT PREPARED later, and if the\r\n> coordinator has cascaded foreign servers (i.g., a foreign server has\r\n> its foreign server) and temporary connection problem happens in the\r\n> intermediate node after PREPARE then another process on the\r\n> intermediate node will execute COMMIT PREPARED on its foreign server.\r\n\r\nAren't both the cases failure cases, and thus handled by the resolver?\r\n\r\n\r\n> > > In terms of performance you're concerned, I wonder if we can somewhat\r\n> > > eliminate the bottleneck if multiple resolvers are able to run on one\r\n> > > database in the future. For example, if we could launch resolver\r\n> > > processes as many as connections on the database, individual backend\r\n> > > processes could have one resolver process. Since there would be\r\n> > > contention and inter-process communication it still brings some\r\n> > > overhead but it might be negligible comparing to network round trip.\r\n> >\r\n> > Do you mean that if concurrent 200 clients each update data on two foreign\r\n> servers, there are 400 resolvers? ...That's overuse of resources.\r\n> \r\n> I think we have 200 resolvers in this case since one resolver process\r\n> per backend process.\r\n\r\nThat does not parallelize prepare or commit for a single client, as each resolver can process only one prepare or commit synchronously at a time. 
Not to mention the resource usage is high.\r\n\r\n\r\n> Or another idea is that all processes queue\r\n> foreign transactions to resolve into the shared memory queue and\r\n> resolver processes fetch and resolve them instead of assigning one\r\n> distributed transaction to one resolver process. Using asynchronous\r\n> execution, the resolver process can process a bunch of foreign\r\n> transactions across distributed transactions and grouped by the\r\n> foreign server at once. It might be more complex than the current\r\n> approach but having multiple resolver processes on one database would\r\n> increase through-put well especially by combining with asynchronous\r\n> execution.\r\n\r\nYeah, that sounds complex. It's simpler and more natural for each client backend to use the connections it has used in its current transaction and issue prepare and commit to the foreign servers, with the resolver just taking care of failed commits and aborts behind the scenes. That's like how the walwriter takes care of writing WAL on behalf of client backends that commit asynchronously.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Mon, 28 Sep 2020 04:58:19 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Mon, 28 Sep 2020 at 13:58, tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> > On Fri, 25 Sep 2020 at 18:21, tsunakawa.takay@fujitsu.com\n> > <tsunakawa.takay@fujitsu.com> wrote:\n> > > Why does the client backend have to create a new connection cache entry\n> > during PREPARE or COMMIT PREPARE? Doesn't the client backend naturally\n> > continue to use connections that it has used in its current transaction?\n> >\n> > I think there are two cases: a process executes PREPARE TRANSACTION\n> > and another process executes COMMIT PREPARED later, and if the\n> > coordinator has cascaded foreign servers (i.g., a foreign server has\n> > its foreign server) and temporary connection problem happens in the\n> > intermediate node after PREPARE then another process on the\n> > intermediate node will execute COMMIT PREPARED on its foreign server.\n>\n> Aren't both the cases failure cases, and thus handled by the resolver?\n\nNo. Please imagine a case where a user executes PREPARE TRANSACTION on\nthe transaction that modified data on foreign servers. The backend\nprocess prepares both the local transaction and foreign transactions.\nBut another client can execute COMMIT PREPARED on the prepared\ntransaction. In this case, another backend newly connects foreign\nservers and commits prepared foreign transactions. Therefore, the new\nconnection cache entry can be created during COMMIT PREPARED which\ncould lead to an error but since the local prepared transaction is\nalready committed the backend must not fail with an error.\n\nIn the latter case, I’m assumed that the backend continues to retry\nforeign transaction resolution until the user requests cancellation.\nPlease imagine the case where the server-A connects a foreign server\n(say, server-B) and server-B connects another foreign server (say,\nserver-C). 
The transaction initiated on server-A modified the data on\nboth the local server and server-B, which further modified the data on server-C\nand executed COMMIT. The backend process on server-A (say, backend-A)\nsends PREPARE TRANSACTION to server-B, then the backend process on\nserver-B (say, backend-B) connected by backend-A prepares the local\ntransaction and further sends PREPARE TRANSACTION to server-C. Let’s\nsuppose a temporary connection failure happens between server-A and\nserver-B before backend-A sends COMMIT PREPARED (i.e., the 2nd phase\nof 2PC). When backend-A attempts to send COMMIT PREPARED to\nserver-B, it realizes that the connection to server-B was lost, but\nsince the user hasn’t requested cancellation yet, backend-A retries\nto connect to server-B and succeeds. Now that backend-A has\nestablished a new connection to server-B, there is another backend\nprocess on server-B (say, backend-B’). Since backend-B’ doesn’t\nhave a connection to server-C yet, it creates a new connection cache\nentry, which could lead to an error. IOW, on server-B different\nprocesses performed PREPARE TRANSACTION and COMMIT PREPARED, and the\nlater process created a connection cache entry.\n\n>\n>\n> > > In terms of performance you're concerned, I wonder if we can somewhat\n> > > eliminate the bottleneck if multiple resolvers are able to run on one\n> > > database in the future. For example, if we could launch resolver\n> > > processes as many as connections on the database, individual backend\n> > > processes could have one resolver process. Since there would be\n> > > contention and inter-process communication it still brings some\n> > > overhead but it might be negligible comparing to network round trip.\n> > >\n> > > Do you mean that if concurrent 200 clients each update data on two foreign\n> > servers, there are 400 resolvers? 
...That's overuse of resources.\n> >\n> > I think we have 200 resolvers in this case since one resolver process\n> > per backend process.\n>\n> That does not parallelize prepare or commit for a single client, as each resolver can process only one prepare or commit synchronously at a time. Not to mention the resource usage is high.\n\nWell, I think we should discuss parallel (and/or asynchronous)\nexecution of prepare and commit separately from the discussion on\nwhether the resolver process is responsible for the 2nd phase of 2PC.\nI've been suggesting that the first phase and the second phase of 2PC\nshould be performed by different processes in terms of safety. And\nhaving multiple resolvers on one database is my suggestion in response\nto the concern you raised that one resolver process on one database\ncan be a bottleneck. Both parallel execution and asynchronous execution\nare slightly related to this topic, but I think they should be discussed\nseparately.\n\nRegarding parallel and asynchronous execution, I basically agree on\nsupporting asynchronous execution, as the XA specification also has it,\nalthough I think it's better not to include it in the first version\nfor simplicity.\n\nOverall, my suggestion for the first version is to support synchronous\nexecution of prepare, commit, and rollback, have one resolver process\nper database, and have the resolver take the 2nd phase of 2PC. As the\nnext step we can add APIs for asynchronous execution, have multiple\nresolvers on one database, and so on.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 28 Sep 2020 21:42:48 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
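The alternative floated above — backends enqueue foreign transactions to resolve into a shared memory queue, and resolver processes drain it, grouping entries by foreign server so one batch handles many distributed transactions — can be sketched briefly. This is a hypothetical illustration of the queuing idea only; the names and the use of an in-process deque as a stand-in for shared memory are invented.

```python
# Sketch of the grouped resolution queue discussed above: backends enqueue
# (server, xid) pairs; a resolver drains the queue and resolves everything
# for one server in a single batch instead of one xact at a time.

from collections import defaultdict, deque

resolution_queue = deque()            # stand-in for a shared-memory queue

def enqueue(server, xid):
    # A backend hands off a prepared foreign transaction for resolution.
    resolution_queue.append((server, xid))

def resolver_drain():
    # Group everything currently queued by foreign server.
    by_server = defaultdict(list)
    while resolution_queue:
        server, xid = resolution_queue.popleft()
        by_server[server].append(xid)
    resolved = {}
    for server, xids in by_server.items():
        # One batched COMMIT PREPARED round per server; in the real design
        # this is where the network exchange (possibly async) would happen.
        resolved[server] = sorted(xids)
    return resolved
```

Grouping by server is what lets a single resolver amortize the per-server round trip across many distributed transactions.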
{
"msg_contents": "From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\r\n> No. Please imagine a case where a user executes PREPARE TRANSACTION on\r\n> the transaction that modified data on foreign servers. The backend\r\n> process prepares both the local transaction and foreign transactions.\r\n> But another client can execute COMMIT PREPARED on the prepared\r\n> transaction. In this case, another backend newly connects foreign\r\n> servers and commits prepared foreign transactions. Therefore, the new\r\n> connection cache entry can be created during COMMIT PREPARED which\r\n> could lead to an error but since the local prepared transaction is\r\n> already committed the backend must not fail with an error.\r\n> \r\n> In the latter case, I’m assumed that the backend continues to retry\r\n> foreign transaction resolution until the user requests cancellation.\r\n> Please imagine the case where the server-A connects a foreign server\r\n> (say, server-B) and server-B connects another foreign server (say,\r\n> server-C). The transaction initiated on server-A modified the data on\r\n> both local and server-B which further modified the data on server-C\r\n> and executed COMMIT. The backend process on server-A (say, backend-A)\r\n> sends PREPARE TRANSACTION to server-B then the backend process on\r\n> server-B (say, backend-B) connected by backend-A prepares the local\r\n> transaction and further sends PREPARE TRANSACTION to server-C. Let’s\r\n> suppose a temporary connection failure happens between server-A and\r\n> server-B before the backend-A sending COMMIT PREPARED (i.g, 2nd phase\r\n> of 2PC). When the backend-A attempts to sends COMMIT PREPARED to\r\n> server-B it realizes that the connection to server-B was lost but\r\n> since the user doesn’t request cancellatino yet the backend-A retries\r\n> to connect server-B and suceeds. 
Since now that the backend-A\r\n> established a new connection to server-B, there is another backend\r\n> process on server-B (say, backend-B’). Since the backend-B’ doen’t\r\n> have a connection to server-C yet, it creates new connection cache\r\n> entry, which could lead to an error. IOW, on server-B different\r\n> processes performed PREPARE TRANSACTION and COMMIT PREPARED and\r\n> the\r\n> later process created a connection cache entry.\r\n\r\nThank you, I understood the situation. I don't think it's a good design to neglect practical performance during normal operation out of fear of a rare error case.\r\n\r\nThe transaction manager (TM) or the FDW implementor can naturally do things like the following:\r\n\r\n* Use palloc_extended(MCXT_ALLOC_NO_OOM) and hash_search(HASH_ENTER_NULL) to return control to the caller.\r\n\r\n* Use PG_TRY(), as its overhead is relatively negligible compared to connection establishment.\r\n\r\n* If the commit fails, the TM asks the resolver to take care of committing the remote transaction, and returns success to the user.\r\n\r\n\r\n> Regarding parallel and asynchronous execution, I basically agree on\r\n> supporting asynchronous execution as the XA specification also has,\r\n> although I think it's better not to include it in the first version\r\n> for simplisity.\r\n> \r\n> Overall, my suggestion for the first version is to support synchronous\r\n> execution of prepare, commit, and rollback, have one resolver process\r\n> per database, and have resolver take 2nd phase of 2PC. As the next\r\n> step we can add APIs for asynchronous execution, have multiple\r\n> resolvers on one database and so on.\r\n\r\nWe don't have to rush to commit a patch that is likely to exhibit impractical performance, as we still have much time left for PG 14. The design needs more thought toward the ideal goal, and refinement. By making efforts to sort through the ideal design, we may be able to avoid rework and API inconsistency. 
As for the API, we haven't validated yet that the FDW implementor can use XA, have we?\r\n\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Tue, 29 Sep 2020 02:36:53 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
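The design proposed in the message above — the second phase of commit never raises; any failed foreign commit is recorded and handed to the resolver for retry, while the user's COMMIT still succeeds — can be sketched as follows. This is a hedged illustration of the hand-off pattern, with all class and function names invented; whether catching arbitrary errors this way is safe is exactly what the thread goes on to debate.

```python
# Sketch of the "return control instead of erroring" second phase: commit
# each prepared foreign transaction, never letting a failure propagate to
# the user; failed servers are handed to a resolver for background retry.

class Resolver:
    def __init__(self):
        self.pending = []

    def take_over(self, server, xid):
        # The resolver will retry COMMIT PREPARED for this xact later.
        self.pending.append((server, xid))

def second_phase_commit(servers, xid, resolver):
    # 'servers' is a list of (name, commit_fn) pairs; commit_fn may raise
    # on e.g. a lost connection.
    for name, commit_fn in servers:
        try:
            commit_fn(xid)
        except Exception:
            # Do not fail the user's COMMIT: after a successful prepare,
            # the outcome is fixed and the resolver can roll forward later.
            resolver.take_over(name, xid)
    return "committed"    # outcome reported to the user
```

The walwriter analogy in the thread is the same shape: the backend reports success, and a background process finishes the durable work.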
{
"msg_contents": "On Tue, 29 Sep 2020 at 11:37, tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> > No. Please imagine a case where a user executes PREPARE TRANSACTION on\n> > the transaction that modified data on foreign servers. The backend\n> > process prepares both the local transaction and foreign transactions.\n> > But another client can execute COMMIT PREPARED on the prepared\n> > transaction. In this case, another backend newly connects foreign\n> > servers and commits prepared foreign transactions. Therefore, the new\n> > connection cache entry can be created during COMMIT PREPARED which\n> > could lead to an error but since the local prepared transaction is\n> > already committed the backend must not fail with an error.\n> >\n> > In the latter case, I’m assumed that the backend continues to retry\n> > foreign transaction resolution until the user requests cancellation.\n> > Please imagine the case where the server-A connects a foreign server\n> > (say, server-B) and server-B connects another foreign server (say,\n> > server-C). The transaction initiated on server-A modified the data on\n> > both local and server-B which further modified the data on server-C\n> > and executed COMMIT. The backend process on server-A (say, backend-A)\n> > sends PREPARE TRANSACTION to server-B then the backend process on\n> > server-B (say, backend-B) connected by backend-A prepares the local\n> > transaction and further sends PREPARE TRANSACTION to server-C. Let’s\n> > suppose a temporary connection failure happens between server-A and\n> > server-B before the backend-A sending COMMIT PREPARED (i.g, 2nd phase\n> > of 2PC). When the backend-A attempts to sends COMMIT PREPARED to\n> > server-B it realizes that the connection to server-B was lost but\n> > since the user doesn’t request cancellatino yet the backend-A retries\n> > to connect server-B and suceeds. 
Since now that the backend-A\n> > established a new connection to server-B, there is another backend\n> > process on server-B (say, backend-B’). Since the backend-B’ doen’t\n> > have a connection to server-C yet, it creates new connection cache\n> > entry, which could lead to an error. IOW, on server-B different\n> > processes performed PREPARE TRANSACTION and COMMIT PREPARED and\n> > the\n> > later process created a connection cache entry.\n>\n> Thank you, I understood the situation. I don't think it's a good design to not address practical performance during normal operation by fearing the rare error case.\n>\n> The transaction manager (TM) or the FDW implementor can naturally do things like the following:\n>\n> * Use palloc_extended(MCXT_ALLOC_NO_OOM) and hash_search(HASH_ENTER_NULL) to return control to the caller.\n>\n> * Use PG_TRY(), as its overhead is relatively negligible to connection establishment.\n\nI suppose you mean that the FDW implementor uses PG_TRY() to catch an\nerror but does not do PG_RE_THROW(). I'm concerned about whether it's\nsafe to return control to the caller and continue trying to resolve\nforeign transactions without either rethrowing an error or aborting the\ntransaction.\n\nIMHO, something like \"high performance, but doesn't work well in a\nrare failure case\" is rather a bad design, especially for a\ntransaction management feature.\n\n>\n> * If the commit fails, the TM asks the resolver to take care of committing the remote transaction, and returns success to the user.\n>\n>\n> > Regarding parallel and asynchronous execution, I basically agree on\n> > supporting asynchronous execution as the XA specification also has,\n> > although I think it's better not to include it in the first version\n> > for simplisity.\n> >\n> > Overall, my suggestion for the first version is to support synchronous\n> > execution of prepare, commit, and rollback, have one resolver process\n> > per database, and have resolver take 2nd phase of 2PC. 
As the next\n> > step we can add APIs for asynchronous execution, have multiple\n> > resolvers on one database and so on.\n>\n> We don't have to rush to commit a patch that is likely to exhibit non-practical performance, as we still have much time left for PG 14. The design needs to be more thought for the ideal goal and refined. By making efforts to sort through the ideal design, we may be able to avoid rework and API inconsistency. As for the API, we haven't validated yet that the FDW implementor can use XA, have we?\n\nYes, we still need to check whether FDW implementors other than\npostgres_fdw are able to support these APIs. I agree that we need more\ndiscussion on the design. My suggestion is to start with a small,\nsimple feature as the first step and not try to include everything in\nthe first version.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 29 Sep 2020 15:03:12 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
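Editor's note: the bullet above mentions palloc_extended(MCXT_ALLOC_NO_OOM) and hash_search(HASH_ENTER_NULL), both of which return NULL on failure instead of raising an ERROR, so the commit path keeps control. The following is a minimal, hypothetical Python analogue of that "report failure via return value" connection-cache pattern; the class and helper names are invented for illustration and are not part of any patch under discussion.

```python
class ConnectionCache:
    """Toy analogue of a backend-local FDW connection cache.

    Mirrors the hash_search(HASH_ENTER_NULL) convention: on failure the
    lookup returns None instead of raising, so the caller keeps control
    and can decide how to proceed (e.g. hand the work to a resolver).
    """

    def __init__(self, connect_fn, max_entries=8):
        self._connect = connect_fn      # connection attempt; may fail
        self._entries = {}
        self._max = max_entries         # stand-in for allocation limits

    def get(self, server):
        if server in self._entries:
            return self._entries[server]
        if len(self._entries) >= self._max:
            return None                 # "allocation" failed: no new entry
        try:
            conn = self._connect(server)
        except OSError:
            return None                 # connection error becomes a soft failure
        self._entries[server] = conn
        return conn

cache = ConnectionCache(lambda s: f"conn:{s}", max_entries=1)
assert cache.get("server-B") == "conn:server-B"
assert cache.get("server-B") == "conn:server-B"   # cached entry reused
assert cache.get("server-C") is None              # cache full: soft failure
```

The design point being debated in the thread is exactly this: whether every allocation and connection attempt on the commit path can realistically be written in this non-throwing style.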
{
"msg_contents": "On Tue, 29 Sep 2020 at 15:03, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Tue, 29 Sep 2020 at 11:37, tsunakawa.takay@fujitsu.com\n> <tsunakawa.takay@fujitsu.com> wrote:\n> >\n> > From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> > > No. Please imagine a case where a user executes PREPARE TRANSACTION on\n> > > the transaction that modified data on foreign servers. The backend\n> > > process prepares both the local transaction and foreign transactions.\n> > > But another client can execute COMMIT PREPARED on the prepared\n> > > transaction. In this case, another backend newly connects foreign\n> > > servers and commits prepared foreign transactions. Therefore, the new\n> > > connection cache entry can be created during COMMIT PREPARED which\n> > > could lead to an error but since the local prepared transaction is\n> > > already committed the backend must not fail with an error.\n> > >\n> > > In the latter case, I’m assumed that the backend continues to retry\n> > > foreign transaction resolution until the user requests cancellation.\n> > > Please imagine the case where the server-A connects a foreign server\n> > > (say, server-B) and server-B connects another foreign server (say,\n> > > server-C). The transaction initiated on server-A modified the data on\n> > > both local and server-B which further modified the data on server-C\n> > > and executed COMMIT. The backend process on server-A (say, backend-A)\n> > > sends PREPARE TRANSACTION to server-B then the backend process on\n> > > server-B (say, backend-B) connected by backend-A prepares the local\n> > > transaction and further sends PREPARE TRANSACTION to server-C. Let’s\n> > > suppose a temporary connection failure happens between server-A and\n> > > server-B before the backend-A sending COMMIT PREPARED (i.g, 2nd phase\n> > > of 2PC). 
When the backend-A attempts to send COMMIT PREPARED to\n> > > server-B it realizes that the connection to server-B was lost, but\n> > > since the user doesn’t request cancellation yet the backend-A retries\n> > > to connect server-B and succeeds. Since now that the backend-A\n> > > established a new connection to server-B, there is another backend\n> > > process on server-B (say, backend-B’). Since the backend-B’ doesn’t\n> > > have a connection to server-C yet, it creates a new connection cache\n> > > entry, which could lead to an error. IOW, on server-B different\n> > > processes performed PREPARE TRANSACTION and COMMIT PREPARED and\n> > > the\n> > > later process created a connection cache entry.\n> >\n> > Thank you, I understood the situation. I don't think it's a good design to not address practical performance during normal operation by fearing the rare error case.\n> >\n> > The transaction manager (TM) or the FDW implementor can naturally do things like the following:\n> >\n> > * Use palloc_extended(MCXT_ALLOC_NO_OOM) and hash_search(HASH_ENTER_NULL) to return control to the caller.\n> >\n> > * Use PG_TRY(), as its overhead is negligible relative to connection establishment.\n>\n> I suppose you mean that the FDW implementor uses PG_TRY() to catch an\n> error but does not do PG_RE_THROW(). I'm not convinced that it's safe to return\n> control to the caller and continue trying to resolve foreign\n> transactions without either rethrowing an error or aborting the\n> transaction.\n>\n> IMHO, a design something like \"high performance but\n> doesn't work correctly in rare failure cases\" is rather bad, especially for the\n> transaction management feature.\n\nTo avoid misunderstanding, I didn't mean to disregard performance.\nI mean that especially for the transaction management feature it's\nessential to work correctly even in failure cases. 
So I hope we have a\nsafe, robust, and probably simple design for the first version that\nmight have low performance at first but has the potential for\nperformance improvement, so that we will be able to improve\nperformance later.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 29 Sep 2020 15:42:01 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\r\n> To avoid misunderstanding, I didn't mean to disregard the performance.\r\n> I mean especially for the transaction management feature it's\r\n> essential to work fine even in failure cases. So I hope we have a\r\n> safe, robust, and probably simple design for the first version that\r\n> might be low performance yet though but have a potential for\r\n> performance improvement and we will be able to try to improve\r\n> performance later.\r\n\r\nYes, correctness (safety?) is a basic premise. I understand that given the time left for PG 14, we haven't yet given up a sound design that offers practical or normally expected performance. I don't think the design has not well thought yet to see if it's simple or complex. At least, I don't believe doing \"send commit request, perform commit on a remote server, and wait for reply\" sequence one transaction at a time in turn is what this community (and other DBMSs) tolerate. A kid's tricycle is safe, but it's not safe to ride a tricycle on the road. Let's not rush to commit and do our best!\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Wed, 30 Sep 2020 07:02:01 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Wed, 30 Sep 2020 at 16:02, tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> > To avoid misunderstanding, I didn't mean to disregard the performance.\n> > I mean especially for the transaction management feature it's\n> > essential to work fine even in failure cases. So I hope we have a\n> > safe, robust, and probably simple design for the first version that\n> > might be low performance yet though but have a potential for\n> > performance improvement and we will be able to try to improve\n> > performance later.\n>\n> Yes, correctness (safety?) is a basic premise. I understand that given the time left for PG 14, we haven't yet given up a sound design that offers practical or normally expected performance. I don't think the design has not well thought yet to see if it's simple or complex. At least, I don't believe doing \"send commit request, perform commit on a remote server, and wait for reply\" sequence one transaction at a time in turn is what this community (and other DBMSs) tolerate. A kid's tricycle is safe, but it's not safe to ride a tricycle on the road. Let's not rush to commit and do our best!\n\nOkay. I'd like to resolve my concern that I repeatedly mentioned and\nwe don't find a good solution yet. That is, how we handle errors\nraised by FDW transaction callbacks during committing/rolling back\nprepared foreign transactions. Actually, this has already been\ndiscussed before[1] and we concluded at that time that using a\nbackground worker to commit/rolling back foreign prepared transactions\nis the best way.\n\nAnyway, let me summarize the discussion on this issue so far. With\nyour idea, after the local commit, the backend process directly call\ntransaction FDW API to commit the foreign prepared transactions.\nHowever, it's likely to happen an error (i.g. ereport(ERROR)) during\nthat due to various reasons. 
It could be an OOM during memory allocation,\na connection error, or whatever. In case an error happens during committing\nprepared foreign transactions, the user will get the error but it's\ntoo late. The local transaction and possibly other foreign prepared\ntransactions have already been committed. You proposed the first idea\nto avoid such a situation: the FDW implementor can write the code\nwhile trying to reduce the possibility of errors happening as much as\npossible, for example by using palloc_extended(MCXT_ALLOC_NO_OOM) and\nhash_search(HASH_ENTER_NULL), but I think it's not a comprehensive\nsolution. They might miss, not know it, or use other functions\nprovided by the core that could lead to an error. Another idea is to use\nPG_TRY() and PG_CATCH(). IIUC with this idea, the FDW implementor catches\nan error but ignores it rather than rethrowing it with PG_RE_THROW() in\norder to return control to the core after an error. I’m really not\nsure that’s a correct usage of those macros. In addition, after\nreturning to the core, it will retry resolving the same or other\nforeign transactions. That is, after ignoring an error, the core needs\nto continue working and possibly call transaction callbacks of other\nFDW implementations.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmoY%3DVkHrzXD%3Djw5DA%2BPp-ePW_6_v5n%2BTJk40s5Q9VXY-Pw%40mail.gmail.com\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 1 Oct 2020 23:00:41 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
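Editor's note: the message above describes why an ERROR during the second phase is "too late" — the local transaction has already committed, so remaining prepared foreign transactions must eventually commit no matter what. The following self-contained Python simulation sketches that control flow (hand failed COMMIT PREPARED calls to a background resolver queue instead of raising); the function and its signature are invented for illustration, not the patch's actual API.

```python
def commit_distributed(local_commit, remote_commits, resolver_queue):
    """Toy 2PC second phase. Once the local transaction commits (the point
    of no return), a failing remote COMMIT PREPARED must not surface as an
    ERROR to the client; the in-doubt transaction is queued for a
    background resolver to retry instead."""
    local_commit()                      # local commit: point of no return
    pending = []
    for name, commit in remote_commits:
        try:
            commit()                    # COMMIT PREPARED on one foreign server
        except ConnectionError:
            pending.append(name)        # leave the in-doubt txn to the resolver
    resolver_queue.extend(pending)
    return "COMMIT"                     # the client always sees success here

log, queue = [], []
def ok(name):
    return lambda: log.append(name)
def fail():
    raise ConnectionError               # e.g. temporary network failure

result = commit_distributed(lambda: log.append("local"),
                            [("B", ok("B")), ("C", fail)], queue)
assert result == "COMMIT"
assert log == ["local", "B"]            # local and server-B committed
assert queue == ["C"]                   # server-C left for the resolver
```

This also illustrates Sawada-san's caveat: when `queue` is non-empty, the client's COMMIT acknowledgment no longer guarantees that the whole distributed transaction is visible everywhere yet.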
{
"msg_contents": "From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\r\n> You proposed the first idea\r\n> to avoid such a situation that FDW implementor can write the code\r\n> while trying to reduce the possibility of errors happening as much as\r\n> possible, for example by usingpalloc_extended(MCXT_ALLOC_NO_OOM) and\r\n> hash_search(HASH_ENTER_NULL) but I think it's not a comprehensive\r\n> solution. They might miss, not know it, or use other functions\r\n> provided by the core that could lead an error.\r\n\r\nWe can give the guideline in the manual, can't we? It should not be especially difficult for the FDW implementor compared to other Postgres's extensibility features that have their own rules -- table/index AM, user-defined C function, trigger function in C, user-defined data types, hooks, etc. And, the Postgres functions that the FDW implementor would use to implement their commit will be very limited, won't they? Because most of the commit processing is performed in the resource manager's library (e.g. Oracle and MySQL client library.)\r\n\r\n(Before that, the developer of server-side modules is not given any information on what functions (like palloc) are available in the manual, is he?)\r\n\r\n\r\n> Another idea is to use\r\n> PG_TRY() and PG_CATCH(). IIUC with this idea, FDW implementor catches\r\n> an error but ignores it rather than rethrowing by PG_RE_THROW() in\r\n> order to return the control to the core after an error. I’m really not\r\n> sure it’s a correct usage of those macros. In addition, after\r\n> returning to the core, it will retry to resolve the same or other\r\n> foreign transactions. That is, after ignoring an error, the core needs\r\n> to continue working and possibly call transaction callbacks of other\r\n> FDW implementations.\r\n\r\nNo, not ignore the error. The FDW can emit a WARNING, LOG, or NOTICE message, and return an error code to TM. 
TM can also emit a message like:\r\n\r\nWARNING: failed to commit part of a transaction on the foreign server 'XXX'\r\nHINT: The server continues to try committing the remote transaction.\r\n\r\nThen TM asks the resolver to take care of committing the remote transaction, and acknowledge the commit success to the client. The relevant return codes of xa_commit() are:\r\n\r\n--------------------------------------------------\r\n[XAER_RMERR] \r\nAn error occurred in committing the work performed on behalf of the transaction \r\nbranch and the branch’s work has been rolled back. Note that returning this error \r\nsignals a catastrophic event to a transaction manager since other resource \r\nmanagers may successfully commit their work on behalf of this branch. This error \r\nshould be returned only when a resource manager concludes that it can never \r\ncommit the branch and that it cannot hold the branch’s resources in a prepared \r\nstate. Otherwise, [XA_RETRY] should be returned. \r\n\r\n[XAER_RMFAIL] \r\nAn error occurred that makes the resource manager unavailable. \r\n--------------------------------------------------\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Fri, 2 Oct 2020 09:20:32 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
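Editor's note: the XA spec excerpt above distinguishes outcomes of xa_commit(): XAER_RMERR (the branch was rolled back — catastrophic and non-retryable), XA_RETRY (try again later), and XAER_RMFAIL (resource manager unavailable). A minimal Python sketch of how a transaction manager might dispatch on those codes, per the quoted semantics (the code values and action strings are invented for illustration):

```python
# XA-style outcome codes from the spec excerpt above (string values are
# arbitrary stand-ins for the spec's integer constants).
XA_OK, XA_RETRY = "XA_OK", "XA_RETRY"
XAER_RMERR, XAER_RMFAIL = "XAER_RMERR", "XAER_RMFAIL"

def tm_action(code):
    """How a TM might react to xa_commit()'s return code: RETRY and RMFAIL
    are transient, so the branch is handed to a resolver for retry; RMERR
    means the branch's work was rolled back and can never commit, a
    heuristic outcome that must be reported rather than retried."""
    if code == XA_OK:
        return "done"
    if code in (XA_RETRY, XAER_RMFAIL):
        return "hand off to resolver and retry"
    if code == XAER_RMERR:
        return "branch rolled back: report heuristic outcome"
    raise ValueError(f"unknown XA code: {code}")

assert tm_action(XA_OK) == "done"
assert tm_action(XA_RETRY) == "hand off to resolver and retry"
assert tm_action(XAER_RMERR) == "branch rolled back: report heuristic outcome"
```

The retryable/non-retryable split here corresponds to the two flags Sawada-san later proposes an FDW could return alongside `false` (retry vs. remove the entry and give up).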
{
"msg_contents": "On Fri, 2 Oct 2020 at 18:20, tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> > You proposed the first idea\n> > to avoid such a situation that FDW implementor can write the code\n> > while trying to reduce the possibility of errors happening as much as\n> > possible, for example by usingpalloc_extended(MCXT_ALLOC_NO_OOM) and\n> > hash_search(HASH_ENTER_NULL) but I think it's not a comprehensive\n> > solution. They might miss, not know it, or use other functions\n> > provided by the core that could lead an error.\n>\n> We can give the guideline in the manual, can't we? It should not be especially difficult for the FDW implementor compared to other Postgres's extensibility features that have their own rules -- table/index AM, user-defined C function, trigger function in C, user-defined data types, hooks, etc. And, the Postgres functions that the FDW implementor would use to implement their commit will be very limited, won't they? Because most of the commit processing is performed in the resource manager's library (e.g. Oracle and MySQL client library.)\n\nYeah, if we think FDW implementors properly implement these APIs while\nfollowing the guideline, giving the guideline is a good idea. But I’m\nnot sure all FDW implementors are able to do that and even if the user\nuses an FDW whose transaction APIs don’t follow the guideline, the\nuser won’t realize it. IMO it’s better to design the feature while not\ndepending on external programs for reliability (correctness?) of this\nfeature, although I might be too worried.\n\n>\n>\n> > Another idea is to use\n> > PG_TRY() and PG_CATCH(). IIUC with this idea, FDW implementor catches\n> > an error but ignores it rather than rethrowing by PG_RE_THROW() in\n> > order to return the control to the core after an error. I’m really not\n> > sure it’s a correct usage of those macros. 
In addition, after\n> > returning to the core, it will retry resolving the same or other\n> > foreign transactions. That is, after ignoring an error, the core needs\n> > to continue working and possibly call transaction callbacks of other\n> > FDW implementations.\n>\n> No, not ignore the error. The FDW can emit a WARNING, LOG, or NOTICE message, and return an error code to the TM. The TM can also emit a message like:\n>\n> WARNING: failed to commit part of a transaction on the foreign server 'XXX'\n> HINT: The server continues to try committing the remote transaction.\n>\n> Then the TM asks the resolver to take care of committing the remote transaction, and acknowledges the commit success to the client.\n\nIt seems like, if resolution failed, the backend would return an\nacknowledgment of COMMIT to the client and the resolver process would\nresolve foreign prepared transactions in the background. So we can\nensure that the distributed transaction is completed at the time when\nthe client gets an acknowledgment of COMMIT only if the 2nd phase of 2PC is\nsuccessfully completed on the first attempt. OTOH, if it failed for\nwhatever reason, there is no such guarantee. From an optimistic\nperspective, i.e., assuming failures are unlikely to happen, it will work\nwell, but IMO it’s not uncommon to fail to resolve foreign transactions\ndue to a network issue, especially in an unreliable network environment,\nfor example a geo-distributed database. 
So I think it will end up\nrequiring the client to check whether preceding distributed transactions\nare completed or not in order to see the results of these\ntransactions.\n\nWe could retry the foreign transaction resolution before leaving it to\nthe resolver process, but the problem that the core continues trying to\nresolve foreign transactions without either aborting the transaction or\nrethrowing even after an error still remains.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 6 Oct 2020 22:52:11 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Tue, Oct 6, 2020 at 7:22 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Fri, 2 Oct 2020 at 18:20, tsunakawa.takay@fujitsu.com\n> <tsunakawa.takay@fujitsu.com> wrote:\n> >\n> > From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> > > You proposed the first idea\n> > > to avoid such a situation that FDW implementor can write the code\n> > > while trying to reduce the possibility of errors happening as much as\n> > > possible, for example by usingpalloc_extended(MCXT_ALLOC_NO_OOM) and\n> > > hash_search(HASH_ENTER_NULL) but I think it's not a comprehensive\n> > > solution. They might miss, not know it, or use other functions\n> > > provided by the core that could lead an error.\n> >\n> > We can give the guideline in the manual, can't we? It should not be especially difficult for the FDW implementor compared to other Postgres's extensibility features that have their own rules -- table/index AM, user-defined C function, trigger function in C, user-defined data types, hooks, etc. And, the Postgres functions that the FDW implementor would use to implement their commit will be very limited, won't they? Because most of the commit processing is performed in the resource manager's library (e.g. Oracle and MySQL client library.)\n>\n> Yeah, if we think FDW implementors properly implement these APIs while\n> following the guideline, giving the guideline is a good idea. But I’m\n> not sure all FDW implementors are able to do that and even if the user\n> uses an FDW whose transaction APIs don’t follow the guideline, the\n> user won’t realize it. IMO it’s better to design the feature while not\n> depending on external programs for reliability (correctness?) of this\n> feature, although I might be too worried.\n>\n\n+1 for that. I don't think it's even in the hands of implementers to\navoid throwing an error in all the conditions.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 7 Oct 2020 17:00:52 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Tue, Oct 6, 2020 at 10:52 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Fri, 2 Oct 2020 at 18:20, tsunakawa.takay@fujitsu.com\n> <tsunakawa.takay@fujitsu.com> wrote:\n> >\n> > From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> > > You proposed the first idea\n> > > to avoid such a situation that FDW implementor can write the code\n> > > while trying to reduce the possibility of errors happening as much as\n> > > possible, for example by usingpalloc_extended(MCXT_ALLOC_NO_OOM) and\n> > > hash_search(HASH_ENTER_NULL) but I think it's not a comprehensive\n> > > solution. They might miss, not know it, or use other functions\n> > > provided by the core that could lead an error.\n> >\n> > We can give the guideline in the manual, can't we? It should not be especially difficult for the FDW implementor compared to other Postgres's extensibility features that have their own rules -- table/index AM, user-defined C function, trigger function in C, user-defined data types, hooks, etc. And, the Postgres functions that the FDW implementor would use to implement their commit will be very limited, won't they? Because most of the commit processing is performed in the resource manager's library (e.g. Oracle and MySQL client library.)\n>\n> Yeah, if we think FDW implementors properly implement these APIs while\n> following the guideline, giving the guideline is a good idea. But I’m\n> not sure all FDW implementors are able to do that and even if the user\n> uses an FDW whose transaction APIs don’t follow the guideline, the\n> user won’t realize it. IMO it’s better to design the feature while not\n> depending on external programs for reliability (correctness?) 
of this\n> feature, although I might be too worried.\n>\n\nAfter more thought on Tsunakawa-san’s idea, it seems to need the\nfollowing conditions:\n\n* At least postgres_fdw is able to implement these APIs while\nguaranteeing that no error happens.\n* A certain number of FDWs (or a majority of FDWs) can do that in a\nsimilar way to postgres_fdw by using the guideline and probably\npostgres_fdw as a reference.\n\nThese are necessary for FDW implementors to implement the APIs while\nfollowing the guideline and for the core to trust them.\n\nAs far as postgres_fdw goes, what we need to do to commit a\nprepared foreign transaction is to get a connection from the\nconnection cache or create and connect if not found, construct a SQL\nquery (COMMIT/ROLLBACK PREPARED with an identifier) using a fixed-size\nbuffer, send the query, and get the result. The possible places to\nraise an error are limited. In case of failures such as a connection\nerror, the FDW can return false to the core along with a flag asking\nthe core to retry. Then the core will retry resolving foreign\ntransactions after some sleep. OTOH, if the FDW judges that there is no\nhope of resolving the foreign transaction, it could also return false\nto the core along with another flag indicating to remove the entry and\nnot to retry. Also, the transaction resolution by the FDW needs to be\ncancellable (interruptible) but cannot use CHECK_FOR_INTERRUPTS().\n\nProbably, as Tsunakawa-san also suggested, it’s not impossible to\nimplement these APIs in postgres_fdw while guaranteeing that no error\nhappens, although I'm not sure about the code complexity. So I think the first\ncondition may be true, but I'm not sure about the second assumption,\nparticularly the interruptible part.\n\nI thought we could support both ideas to get their pros: supporting\nTsunakawa-san's idea and then my idea if necessary, and the FDW can choose\nwhether to ask the resolver process to perform the 2nd phase of 2PC or\nnot. 
But it's not a good idea in terms of complexity.\n\nRegards,\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 8 Oct 2020 13:37:11 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "Sorry to be late to respond. (My PC is behaving strangely after upgrading Win10 2004)\r\n\r\nFrom: Masahiko Sawada <sawada.mshk@gmail.com>\r\n> After more thoughts on Tsunakawa-san’s idea it seems to need the\r\n> following conditions:\r\n> \r\n> * At least postgres_fdw is viable to implement these APIs while\r\n> guaranteeing not to happen any error.\r\n> * A certain number of FDWs (or majority of FDWs) can do that in a\r\n> similar way to postgres_fdw by using the guideline and probably\r\n> postgres_fdw as a reference.\r\n> \r\n> These are necessary for FDW implementors to implement APIs while\r\n> following the guideline and for the core to trust them.\r\n> \r\n> As far as postgres_fdw goes, what we need to do when committing a\r\n> foreign transaction resolution is to get a connection from the\r\n> connection cache or create and connect if not found, construct a SQL\r\n> query (COMMIT/ROLLBACK PREPARED with identifier) using a fixed-size\r\n> buffer, send the query, and get the result. The possible place to\r\n> raise an error is limited. In case of failures such as connection\r\n> error FDW can return false to the core along with a flag indicating to\r\n> ask the core retry. Then the core will retry to resolve foreign\r\n> transactions after some sleep. OTOH if FDW sized up that there is no\r\n> hope of resolving the foreign transaction, it also could return false\r\n> to the core along with another flag indicating to remove the entry and\r\n> not to retry. Also, the transaction resolution by FDW needs to be\r\n> cancellable (interruptible) but cannot use CHECK_FOR_INTERRUPTS().\r\n> \r\n> Probably, as Tsunakawa-san also suggested, it’s not impossible to\r\n> implement these APIs in postgres_fdw while guaranteeing not to happen\r\n> any error, although not sure the code complexity. 
So I think the first\r\n> condition may be true but not sure about the second assumption,\r\n> particularly about the interruptible part.\r\n\r\nYeah, I expect the commit of the second phase should not be difficult for the FDW developer.\r\n\r\nAs for the cancellation during commit retry, I don't think we necessarily have to make the TM responsible for retrying the commits. Many DBMSs have their own timeout functionality such as connection timeout, socket timeout, and statement timeout. Users can set those parameters in the foreign server options based on how long the end user can wait. That is, the TM calls the FDW's commit routine just once.\r\n\r\nIf the TM makes efforts to retry commits, the duration would be from a few seconds to 30 seconds. Then, we can hold back the cancellation during that period.\r\n\r\n\r\n> I thought we could support both ideas to get their pros; supporting\r\n> Tsunakawa-san's idea and then my idea if necessary, and FDW can choose\r\n> whether to ask the resolver process to perform 2nd phase of 2PC or\r\n> not. But it's not a good idea in terms of complexity.\r\n\r\nI don't feel the need for leaving the commit to the resolver during normal operation.\r\n\r\n\r\n> It seems like, if resolution failed, the backend would return an\r\n> acknowledgment of COMMIT to the client and the resolver process\r\n> resolves foreign prepared transactions in the background. So we can\r\n> ensure that the distributed transaction is completed at the time when\r\n> the client got an acknowledgment of COMMIT if the 2nd phase of 2PC is\r\n> successfully completed on the first attempt. OTOH, if it failed for\r\n> whatever reason, there is no such guarantee. From an optimistic\r\n> perspective, i.e., the failures are unlikely to happen, it will work\r\n> well but IMO it’s not uncommon to fail to resolve foreign transactions\r\n> due to a network issue, especially in an unreliable network environment\r\n> for example a geo-distributed database. 
So I think it will end up\r\n> requiring the client to check whether preceding distributed transactions\r\n> are completed or not in order to see the results of these\r\n> transactions.\r\n\r\nThat issue exists with any method, doesn't it?\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Thu, 8 Oct 2020 09:05:27 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Thu, 8 Oct 2020 at 18:05, tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> Sorry to be late to respond. (My PC is behaving strangely after upgrading Win10 2004)\n>\n> From: Masahiko Sawada <sawada.mshk@gmail.com>\n> > After more thoughts on Tsunakawa-san’s idea it seems to need the\n> > following conditions:\n> >\n> > * At least postgres_fdw is viable to implement these APIs while\n> > guaranteeing not to happen any error.\n> > * A certain number of FDWs (or majority of FDWs) can do that in a\n> > similar way to postgres_fdw by using the guideline and probably\n> > postgres_fdw as a reference.\n> >\n> > These are necessary for FDW implementors to implement APIs while\n> > following the guideline and for the core to trust them.\n> >\n> > As far as postgres_fdw goes, what we need to do when committing a\n> > foreign transaction resolution is to get a connection from the\n> > connection cache or create and connect if not found, construct a SQL\n> > query (COMMIT/ROLLBACK PREPARED with identifier) using a fixed-size\n> > buffer, send the query, and get the result. The possible place to\n> > raise an error is limited. In case of failures such as connection\n> > error FDW can return false to the core along with a flag indicating to\n> > ask the core retry. Then the core will retry to resolve foreign\n> > transactions after some sleep. OTOH if FDW sized up that there is no\n> > hope of resolving the foreign transaction, it also could return false\n> > to the core along with another flag indicating to remove the entry and\n> > not to retry. Also, the transaction resolution by FDW needs to be\n> > cancellable (interruptible) but cannot use CHECK_FOR_INTERRUPTS().\n> >\n> > Probably, as Tsunakawa-san also suggested, it’s not impossible to\n> > implement these APIs in postgres_fdw while guaranteeing not to happen\n> > any error, although not sure the code complexity. 
So I think the first\n> > condition may be true but not sure about the second assumption,\n> > particularly about the interruptible part.\n>\n> Yeah, I expect the commit of the second phase should not be difficult for the FDW developer.\n>\n> As for the cancellation during commit retry, I don't think we necessarily have to make the TM responsible for retrying the commits. Many DBMSs have their own timeout functionality such as connection timeout, socket timeout, and statement timeout.\n> Users can set those parameters in the foreign server options based on how long the end user can wait. That is, TM calls FDW's commit routine just once.\n\nWhat about temporary network failures? I think there are users who\ndon't want to give up resolving foreign transactions failed due to a\ntemporary network failure. Or even they might want to wait for\ntransaction completion until they send a cancel request. If we want to\ncall the commit routine only once and therefore want FDW to retry\nconnecting the foreign server within the call, it means we require all\nFDW implementors to write a retry loop code that is interruptible and\nensures not to raise an error, which increases difficulty.\n\nAlso, what if the user sets the statement timeout to 60 sec and they\nwant to cancel the waits after 5 sec by pressing ctl-C? You mentioned\nthat client libraries of other DBMSs don't have asynchronous execution\nfunctionality. If the SQL execution function is not interruptible, the\nuser will end up waiting for 60 sec, which seems not good.\n\n> If the TM makes efforts to retry commits, the duration would be from a few seconds to 30 seconds. Then, we can hold back the cancellation during that period.\n>\n>\n> > I thought we could support both ideas to get their pros; supporting\n> > Tsunakawa-san's idea and then my idea if necessary, and FDW can choose\n> > whether to ask the resolver process to perform 2nd phase of 2PC or\n> > not. 
But it's not a good idea in terms of complexity.\n>\n> I don't feel the need for leaving the commit to the resolver during normal operation.\n\nI meant it's for FDWs that cannot guarantee that no error happens during\nresolution.\n\n> > It seems like, if resolution failed, the backend would return an\n> > acknowledgment of COMMIT to the client and the resolver process\n> > resolves foreign prepared transactions in the background. So we can\n> > ensure that the distributed transaction is completed at the time when\n> > the client got an acknowledgment of COMMIT if the 2nd phase of 2PC is\n> > successfully completed on the first attempt. OTOH, if it failed for\n> > whatever reason, there is no such guarantee. From an optimistic\n> > perspective, i.e., the failures are unlikely to happen, it will work\n> > well but IMO it’s not uncommon to fail to resolve foreign transactions\n> > due to a network issue, especially in an unreliable network environment\n> > for example a geo-distributed database. So I think it will end up\n> > requiring the client to check whether preceding distributed transactions\n> > are completed or not in order to see the results of these\n> > transactions.\n>\n> That issue exists with any method, doesn't it?\n\nYes, but if we don’t retry to resolve foreign transactions at all in\nan unreliable network environment, the user might end up requiring\nevery transaction to check the status of foreign transactions of the\nprevious distributed transaction before it starts. If we allow\nretrying, I guess we ease that somewhat.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 8 Oct 2020 21:47:59 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\r\n> What about temporary network failures? I think there are users who\r\n> don't want to give up resolving foreign transactions failed due to a\r\n> temporary network failure. Or even they might want to wait for\r\n> transaction completion until they send a cancel request. If we want to\r\n> call the commit routine only once and therefore want FDW to retry\r\n> connecting the foreign server within the call, it means we require all\r\n> FDW implementors to write a retry loop code that is interruptible and\r\n> ensures not to raise an error, which increases difficulty.\r\n>\r\n> Yes, but if we don’t retry to resolve foreign transactions at all on\r\n> an unreliable network environment, the user might end up requiring\r\n> every transaction to check the status of foreign transactions of the\r\n> previous distributed transaction before starts. If we allow to do\r\n> retry, I guess we ease that somewhat.\r\n\r\nOK. As I said, I'm not against trying to cope with temporary network failure. I just don't think it's mandatory. If the network failure is really temporary and thus recovers soon, then the resolver will be able to commit the transaction soon, too.\r\n\r\nThen, we can have a commit retry timeout or retry count like the following WebLogic manual says. (I couldn't quickly find the English manual, so below is in Japanese. I quoted some text that got through machine translation, which appears a bit strange.)\r\n\r\nhttps://docs.oracle.com/cd/E92951_01/wls/WLJTA/trxcon.htm\r\n--------------------------------------------------\r\nAbandon timeout\r\nSpecifies the maximum time (in seconds) that the transaction manager attempts to complete the second phase of a two-phase commit transaction.\r\n\r\nIn the second phase of a two-phase commit transaction, the transaction manager attempts to complete the transaction until all resource managers indicate that the transaction is complete. 
After the abort transaction timer expires, no attempt is made to resolve the transaction. If the transaction enters a ready state before it is destroyed, the transaction manager rolls back the transaction and releases the held lock on behalf of the destroyed transaction.\r\n--------------------------------------------------\r\n\r\n\r\n\r\n> Also, what if the user sets the statement timeout to 60 sec and they\r\n> want to cancel the waits after 5 sec by pressing ctl-C? You mentioned\r\n> that client libraries of other DBMSs don't have asynchronous execution\r\n> functionality. If the SQL execution function is not interruptible, the\r\n> user will end up waiting for 60 sec, which seems not good.\r\n\r\nFDW functions can be uninterruptible in general, aren't they? We experienced that odbc_fdw didn't allow cancellation of SQL execution.\r\n\r\n\r\n Regards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Fri, 9 Oct 2020 02:33:37 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
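The commit-retry-with-abandon-timeout behavior discussed above can be sketched as follows. This is a hypothetical coordinator-side helper, not code from any patch in this thread: `commit_fn` stands in for whatever issues COMMIT PREPARED on a foreign server, and the `clock`/`sleep` hooks exist only to make the sketch testable.

```python
import time

def resolve_with_abandon_timeout(commit_fn, abandon_timeout_sec,
                                 retry_interval_sec=1.0,
                                 clock=time.monotonic, sleep=time.sleep):
    """Retry a prepared foreign transaction's commit until it succeeds
    or the abandon timeout expires.  Returns True on success, False if
    the transaction is abandoned and left to a resolver/operator."""
    deadline = clock() + abandon_timeout_sec
    while clock() < deadline:
        try:
            commit_fn()            # e.g. issue COMMIT PREPARED remotely
            return True
        except ConnectionError:    # temporary network failure: retry
            sleep(min(retry_interval_sec, max(0.0, deadline - clock())))
    return False                   # abandoned; no further attempts are made
```

After the timeout expires no further attempt is made, matching the "abandon timeout" behavior of the WebLogic documentation quoted above.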
{
"msg_contents": "At Fri, 9 Oct 2020 02:33:37 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in \r\n> From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\r\n> > What about temporary network failures? I think there are users who\r\n> > don't want to give up resolving foreign transactions failed due to a\r\n> > temporary network failure. Or even they might want to wait for\r\n> > transaction completion until they send a cancel request. If we want to\r\n> > call the commit routine only once and therefore want FDW to retry\r\n> > connecting the foreign server within the call, it means we require all\r\n> > FDW implementors to write a retry loop code that is interruptible and\r\n> > ensures not to raise an error, which increases difficulty.\r\n> >\r\n> > Yes, but if we don’t retry to resolve foreign transactions at all on\r\n> > an unreliable network environment, the user might end up requiring\r\n> > every transaction to check the status of foreign transactions of the\r\n> > previous distributed transaction before starts. If we allow to do\r\n> > retry, I guess we ease that somewhat.\r\n> \r\n> OK. As I said, I'm not against trying to cope with temporary network failure. I just don't think it's mandatory. If the network failure is really temporary and thus recovers soon, then the resolver will be able to commit the transaction soon, too.\r\n\r\nI should missing something, though...\r\n\r\nI don't understand why we hate ERRORs from fdw-2pc-commit routine so\r\nmuch. I think remote-commits should be performed before local commit\r\npasses the point-of-no-return and the v26-0002 actually places\r\nAtEOXact_FdwXact() before the critical section.\r\n\r\n(FWIW, I think remote commits should be performed by backends, not by\r\nanother process, because backends should wait for all remote-commits\r\nto end anyway and it is simpler. 
If we want to run multiple remote-commits\r\nin parallel, we could do that by adding some async-waiting interface.)\r\n\r\n> Then, we can have a commit retry timeout or retry count like the following WebLogic manual says. (I couldn't quickly find the English manual, so below is in Japanese. I quoted some text that got through machine translation, which appears a bit strange.)\r\n> \r\n> https://docs.oracle.com/cd/E92951_01/wls/WLJTA/trxcon.htm\r\n> --------------------------------------------------\r\n> Abandon timeout\r\n> Specifies the maximum time (in seconds) that the transaction manager attempts to complete the second phase of a two-phase commit transaction.\r\n> \r\n> In the second phase of a two-phase commit transaction, the transaction manager attempts to complete the transaction until all resource managers indicate that the transaction is complete. After the abort transaction timer expires, no attempt is made to resolve the transaction. If the transaction enters a ready state before it is destroyed, the transaction manager rolls back the transaction and releases the held lock on behalf of the destroyed transaction.\r\n> --------------------------------------------------\r\n\r\nThat's not a retry timeout but a timeout for total time of all\r\n2nd-phase-commits. But I think it would be sufficient. Even if an\r\nfdw could retry 2pc-commit, it's a matter of that fdw and the core has\r\nnothing to do with.\r\n\r\n> > Also, what if the user sets the statement timeout to 60 sec and they\r\n> > want to cancel the waits after 5 sec by pressing ctl-C? If the SQL execution function is not interruptible, the\r\n> > user will end up waiting for 60 sec, which seems not good.\r\n\r\nI think fdw-2pc-commit can be interruptible safely as far as we run\r\nthe remote commits before entering the critical section of local commit.\r\n\r\n> FDW functions can be uninterruptible in general, aren't they? We experienced that odbc_fdw didn't allow cancellation of SQL execution.\r\n\r\nAt least postgres_fdw is interruptible while waiting the remote.\r\n\r\ncreate view lt as select 1 as slp from (select pg_sleep(10)) t;\r\ncreate foreign table ft(slp int) server sv1 options (table_name 'lt');\r\nselect * from ft;\r\n^CCancel request sent\r\nERROR: canceling statement due to user request\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Fri, 09 Oct 2020 14:55:14 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take\n 2"
},
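The interruptible wait that makes the cancellation demo above work — poll the connection's socket in short, bounded slices and check for a pending cancel request between slices, the way postgres_fdw's pgfdw_get_result() loops over PQconsumeInput()/PQisBusy()/PQgetResult() — can be sketched like this. The Python callables are stand-ins for the libpq calls, purely illustrative:

```python
import select

def interruptible_get_result(sock, consume_input, is_busy, get_result,
                             interrupt_pending):
    """Wait for a remote result without an uninterruptible block:
    poll the connection's socket in short, bounded slices and honor a
    pending cancel request between slices."""
    while is_busy():
        if interrupt_pending():
            # In the backend this is CHECK_FOR_INTERRUPTS(); here we
            # raise to model "canceling statement due to user request".
            raise KeyboardInterrupt("canceling statement due to user request")
        ready, _, _ = select.select([sock], [], [], 0.1)  # bounded wait
        if ready:
            consume_input()          # drain whatever arrived
    return get_result()              # complete result is now available
```

The key point is that the process never blocks indefinitely inside a single library call, so a cancel request is noticed within one poll slice.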
{
"msg_contents": "From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\r\n> I don't understand why we hate ERRORs from fdw-2pc-commit routine so\r\n> much. I think remote-commits should be performed before local commit\r\n> passes the point-of-no-return and the v26-0002 actually places\r\n> AtEOXact_FdwXact() before the critical section.\r\n\r\nI don't hate ERROR, but it would be simpler and understandable for the FDW commit routine to just return control to the caller (TM) and let TM do whatever is appropriate (asks the resolver to handle the failed commit, and continues to request next FDW to commit.)\r\n\r\n\r\n> > https://docs.oracle.com/cd/E92951_01/wls/WLJTA/trxcon.htm\r\n> > --------------------------------------------------\r\n> > Abandon timeout\r\n> > Specifies the maximum time (in seconds) that the transaction manager\r\n> attempts to complete the second phase of a two-phase commit transaction.\r\n> >\r\n> > In the second phase of a two-phase commit transaction, the transaction\r\n> manager attempts to complete the transaction until all resource managers\r\n> indicate that the transaction is complete. After the abort transaction timer\r\n> expires, no attempt is made to resolve the transaction. If the transaction enters\r\n> a ready state before it is destroyed, the transaction manager rolls back the\r\n> transaction and releases the held lock on behalf of the destroyed transaction.\r\n> > --------------------------------------------------\r\n> \r\n> That's not a retry timeout but a timeout for total time of all\r\n> 2nd-phase-commits. But I think it would be sufficient. Even if an\r\n> fdw could retry 2pc-commit, it's a matter of that fdw and the core has\r\n> nothing to do with.\r\n\r\nYeah, the WebLogic documentation doesn't say whether it performs retries during the timeout period. 
I just cited as an example that has a timeout parameter for the second phase of 2PC.\r\n\r\n\r\n> At least postgres_fdw is interruptible while waiting the remote.\r\n> \r\n> create view lt as select 1 as slp from (select pg_sleep(10)) t;\r\n> create foreign table ft(slp int) server sv1 options (table_name 'lt');\r\n> select * from ft;\r\n> ^CCancel request sent\r\n> ERROR: canceling statement due to user request\r\n\r\nI'm afraid the cancellation doesn't work while postgres_fdw is trying to connect to a down server. Also, Postgres manual doesn't say about cancellation, so we cannot expect FDWs to respond to user's cancel request.\r\n\r\n\r\n Regards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Fri, 9 Oct 2020 07:27:30 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, 9 Oct 2020 at 11:33, tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> > What about temporary network failures? I think there are users who\n> > don't want to give up resolving foreign transactions failed due to a\n> > temporary network failure. Or even they might want to wait for\n> > transaction completion until they send a cancel request. If we want to\n> > call the commit routine only once and therefore want FDW to retry\n> > connecting the foreign server within the call, it means we require all\n> > FDW implementors to write a retry loop code that is interruptible and\n> > ensures not to raise an error, which increases difficulty.\n> >\n> > Yes, but if we don’t retry to resolve foreign transactions at all on\n> > an unreliable network environment, the user might end up requiring\n> > every transaction to check the status of foreign transactions of the\n> > previous distributed transaction before starts. If we allow to do\n> > retry, I guess we ease that somewhat.\n>\n> OK. As I said, I'm not against trying to cope with temporary network failure. I just don't think it's mandatory. If the network failure is really temporary and thus recovers soon, then the resolver will be able to commit the transaction soon, too.\n\nWell, I agree that it's not mandatory. I think it's better if the user\ncan choose.\n\nI also doubt how useful the per-foreign-server timeout setting you\nmentioned before. For example, suppose the transaction involves with\nthree foreign servers that have different timeout setting, what if the\nbackend failed to commit on the first one of the server due to\ntimeout? Does it attempt to commit on the other two servers? Or does\nit give up and return the control to the client? In the former case,\nwhat if the backend failed again on one of the other two servers due\nto timeout? 
The backend might end up waiting for all timeouts and in\npractice the user is not aware of how many servers are involved with\nthe transaction, for example in a sharding. So It seems to be hard to\npredict the total timeout. In the latter case, the backend might\nsucceed to commit on the other two nodes. Also, the timeout setting of\nthe first foreign server virtually is used as the whole foreign\ntransaction resolution timeout. However, the user cannot control the\norder of resolution. So again it seems to be hard for the user to\npredict the timeout. So If we have a timeout mechanism, I think it's\nbetter if the user can control the timeout for each transaction.\nProbably the same is true for the retry.\n\n>\n> Then, we can have a commit retry timeout or retry count like the following WebLogic manual says. (I couldn't quickly find the English manual, so below is in Japanese. I quoted some text that got through machine translation, which appears a bit strange.)\n>\n> https://docs.oracle.com/cd/E92951_01/wls/WLJTA/trxcon.htm\n> --------------------------------------------------\n> Abandon timeout\n> Specifies the maximum time (in seconds) that the transaction manager attempts to complete the second phase of a two-phase commit transaction.\n>\n> In the second phase of a two-phase commit transaction, the transaction manager attempts to complete the transaction until all resource managers indicate that the transaction is complete. After the abort transaction timer expires, no attempt is made to resolve the transaction. If the transaction enters a ready state before it is destroyed, the transaction manager rolls back the transaction and releases the held lock on behalf of the destroyed transaction.\n> --------------------------------------------------\n\nYeah per-transaction timeout for 2nd phase of 2PC seems a good idea.\n\n>\n>\n>\n> > Also, what if the user sets the statement timeout to 60 sec and they\n> > want to cancel the waits after 5 sec by pressing ctl-C? 
You mentioned\n> > that client libraries of other DBMSs don't have asynchronous execution\n> > functionality. If the SQL execution function is not interruptible, the\n> > user will end up waiting for 60 sec, which seems not good.\n>\n> FDW functions can be uninterruptible in general, aren't they? We experienced that odbc_fdw didn't allow cancellation of SQL execution.\n\nFor example in postgres_fdw, it executes a SQL in asynchronous manner\nusing by PQsendQuery(), PQconsumeInput() and PQgetResult() and so on\n(see do_sql_command() and pgfdw_get_result()). Therefore it the user\npressed ctl-C, the remote query would be canceled and raise an ERROR.\n\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 9 Oct 2020 20:01:23 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
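The predictability argument above — per-foreign-server timeouts add up in ways the user cannot foresee, while a single per-transaction budget bounds the total wait — can be illustrated with a small sketch. These are hypothetical helpers, not the patch's API:

```python
import time

class TxnResolutionDeadline:
    """One timeout for the whole second phase of 2PC, shared by every
    participant, so the client-visible wait stays bounded no matter
    how many foreign servers the transaction touched."""

    def __init__(self, total_sec, clock=time.monotonic):
        self._clock = clock
        self._deadline = clock() + total_sec

    def remaining(self):
        return max(0.0, self._deadline - self._clock())

def resolve_all(servers, deadline, commit_fn):
    """Try COMMIT PREPARED on each server within the shared budget;
    servers that cannot be resolved in time are returned so a
    background resolver can retry them later."""
    unresolved = []
    for server in servers:
        budget = deadline.remaining()
        if budget == 0.0 or not commit_fn(server, budget):
            unresolved.append(server)
    return unresolved
```

With per-server timeouts the worst-case wait is the sum over all participants; with the shared deadline each participant only gets what is left of one budget the user chose for the transaction.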
{
"msg_contents": "On Fri, 9 Oct 2020 at 14:55, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> At Fri, 9 Oct 2020 02:33:37 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in\n> > From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> > > What about temporary network failures? I think there are users who\n> > > don't want to give up resolving foreign transactions failed due to a\n> > > temporary network failure. Or even they might want to wait for\n> > > transaction completion until they send a cancel request. If we want to\n> > > call the commit routine only once and therefore want FDW to retry\n> > > connecting the foreign server within the call, it means we require all\n> > > FDW implementors to write a retry loop code that is interruptible and\n> > > ensures not to raise an error, which increases difficulty.\n> > >\n> > > Yes, but if we don’t retry to resolve foreign transactions at all on\n> > > an unreliable network environment, the user might end up requiring\n> > > every transaction to check the status of foreign transactions of the\n> > > previous distributed transaction before starts. If we allow to do\n> > > retry, I guess we ease that somewhat.\n> >\n> > OK. As I said, I'm not against trying to cope with temporary network failure. I just don't think it's mandatory. If the network failure is really temporary and thus recovers soon, then the resolver will be able to commit the transaction soon, too.\n>\n> I should missing something, though...\n>\n> I don't understand why we hate ERRORs from fdw-2pc-commit routine so\n> much. I think remote-commits should be performed before local commit\n> passes the point-of-no-return and the v26-0002 actually places\n> AtEOXact_FdwXact() before the critical section.\n>\n\nSo you're thinking the following sequence?\n\n1. Prepare all foreign transactions.\n2. Commit the all prepared foreign transactions.\n3. 
Commit the local transaction.\n\nSuppose we have the backend process call the commit routine, what if\none of FDW raises an ERROR during committing the foreign transaction\nafter committing other foreign transactions? The transaction will end\nup with an abort but some foreign transactions are already committed.\nAlso, what if the backend process failed to commit the local\ntransaction? Since it already committed all foreign transactions it\ncannot ensure the global atomicity in this case too. Therefore, I\nthink we should commit the distributed transactions in the following\nsequence:\n\n1. Prepare all foreign transactions.\n2. Commit the local transaction.\n3. Commit the all prepared foreign transactions.\n\nBut this is still not a perfect solution. If we have the backend\nprocess call the commit routine and an error happens during executing\nthe commit routine of an FDW (i.g., at step 3) it's too late to report\nan error to the client because we already committed the local\ntransaction. So the current solution is to have a background process\ncommit the foreign transactions so that the backend can just wait\nwithout the possibility of errors.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 9 Oct 2020 21:45:57 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
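The commit sequence described above — prepare everywhere, commit locally, then commit the prepared foreign transactions, handing step-3 failures to a background resolver rather than raising an error after the outcome is already decided — can be sketched as follows. Object names are illustrative stand-ins, not the patch's actual API:

```python
def commit_distributed(local, foreign_servers, resolver_queue):
    """1. PREPARE TRANSACTION everywhere: an error here can still
          abort the whole distributed transaction cleanly.
       2. Commit the local transaction: the point of no return; the
          outcome of the distributed transaction is now COMMIT.
       3. COMMIT PREPARED everywhere: an error here must not be
          raised to the client, so the failed participant is handed
          to a background resolver to retry."""
    for fx in foreign_servers:
        fx.prepare()                    # step 1: may raise; still safe
    local.commit()                      # step 2: point of no return
    for fx in foreign_servers:          # step 3: outcome cannot change
        try:
            fx.commit_prepared()
        except ConnectionError:
            resolver_queue.append(fx)   # retried later; client sees COMMIT
```

This makes the trade-off visible: after step 2 the client must be told COMMIT, so a failure in step 3 can only be deferred, never reported as an ERROR.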
{
"msg_contents": "From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\r\n> I also doubt how useful the per-foreign-server timeout setting you\r\n> mentioned before. For example, suppose the transaction involves with\r\n> three foreign servers that have different timeout setting, what if the\r\n> backend failed to commit on the first one of the server due to\r\n> timeout? Does it attempt to commit on the other two servers? Or does\r\n> it give up and return the control to the client? In the former case,\r\n> what if the backend failed again on one of the other two servers due\r\n> to timeout? The backend might end up waiting for all timeouts and in\r\n> practice the user is not aware of how many servers are involved with\r\n> the transaction, for example in a sharding. So It seems to be hard to\r\n> predict the total timeout. In the latter case, the backend might\r\n> succeed to commit on the other two nodes. Also, the timeout setting of\r\n> the first foreign server virtually is used as the whole foreign\r\n> transaction resolution timeout. However, the user cannot control the\r\n> order of resolution. So again it seems to be hard for the user to\r\n> predict the timeout. So If we have a timeout mechanism, I think it's\r\n> better if the user can control the timeout for each transaction.\r\n> Probably the same is true for the retry.\r\n\r\nI agree that the user can control the timeout per transaction, not per FDW. I was just not sure if the Postgres core can define the timeout parameter and the FDWs can follow its setting. However, JTA defines a transaction timeout API (not commit timeout, though), and each RM can choose to implement them. 
So I think we can define the parameter and/or routines for the timeout in core likewise.\r\n\r\n\r\n--------------------------------------------------\r\npublic interface javax.transaction.xa.XAResource \r\n\r\nint getTransactionTimeout() throws XAException \r\nThis method returns the transaction timeout value set for this XAResourceinstance. If XAResource.\r\nsetTransactionTimeout was not use prior to invoking this method, the return value is the \r\ndefault timeout set for the resource manager; otherwise, the value used in the previous setTransactionTimeoutcall \r\nis returned. \r\n\r\nThrows: XAException \r\nAn error has occurred. Possible exception values are: XAER_RMERR, XAER_RMFAIL. \r\n\r\nReturns: \r\nThe transaction timeout values in seconds. \r\n\r\nboolean setTransactionTimeout(int seconds) throws XAException \r\nThis method sets the transaction timeout value for this XAResourceinstance. Once set, this timeout value \r\nis effective until setTransactionTimeoutis invoked again with a different value. To reset the timeout \r\nvalue to the default value used by the resource manager, set the value to zero. \r\n\r\nIf the timeout operation is performed successfully, the method returns true; otherwise false. If a resource \r\nmanager does not support transaction timeout value to be set explicitly, this method returns false. \r\n\r\nParameters:\r\n\r\n seconds \r\nAn positive integer specifying the timout value in seconds. Zero resets the transaction timeout \r\nvalue to the default one used by the resource manager. A negative value results in XAException \r\nto be thrown with XAER_INVAL error code. \r\n\r\nReturns: \r\ntrue if transaction timeout value is set successfully; otherwise false. \r\n\r\nThrows: XAException \r\nAn error has occurred. Possible exception values are: XAER_RMERR, XAER_RMFAIL, or \r\nXAER_INVAL. 
\r\n--------------------------------------------------\r\n\r\n\r\n\r\n> For example in postgres_fdw, it executes a SQL in asynchronous manner\r\n> using by PQsendQuery(), PQconsumeInput() and PQgetResult() and so on\r\n> (see do_sql_command() and pgfdw_get_result()). Therefore it the user\r\n> pressed ctl-C, the remote query would be canceled and raise an ERROR.\r\n\r\nYeah, as I replied to Horiguchi-san, postgres_fdw can cancel queries. But postgres_fdw is not ready to cancel connection establishment, is it? At present, the user needs to set connect_timeout parameter on the foreign server to a reasonable short time so that it can respond quickly to cancellation requests. Alternately, we can modify postgres_fdw to use libpq's asynchronous connect functions.\r\n\r\nAnother issue is that the Postgres manual does not stipulate anything about cancellation of FDW processing. That's why I said that the current FDW does not support cancellation in general. Of course, I think we can stipulate the ability to cancel processing in the FDW interface.\r\n\r\n\r\n Regards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Mon, 12 Oct 2020 02:07:54 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Mon, 12 Oct 2020 at 11:08, tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> > I also doubt how useful the per-foreign-server timeout setting you\n> > mentioned before. For example, suppose the transaction involves with\n> > three foreign servers that have different timeout setting, what if the\n> > backend failed to commit on the first one of the server due to\n> > timeout? Does it attempt to commit on the other two servers? Or does\n> > it give up and return the control to the client? In the former case,\n> > what if the backend failed again on one of the other two servers due\n> > to timeout? The backend might end up waiting for all timeouts and in\n> > practice the user is not aware of how many servers are involved with\n> > the transaction, for example in a sharding. So It seems to be hard to\n> > predict the total timeout. In the latter case, the backend might\n> > succeed to commit on the other two nodes. Also, the timeout setting of\n> > the first foreign server virtually is used as the whole foreign\n> > transaction resolution timeout. However, the user cannot control the\n> > order of resolution. So again it seems to be hard for the user to\n> > predict the timeout. So If we have a timeout mechanism, I think it's\n> > better if the user can control the timeout for each transaction.\n> > Probably the same is true for the retry.\n>\n> I agree that the user can control the timeout per transaction, not per FDW. I was just not sure if the Postgres core can define the timeout parameter and the FDWs can follow its setting. However, JTA defines a transaction timeout API (not commit timeout, though), and each RM can choose to implement them. 
So I think we can define the parameter and/or routines for the timeout in core likewise.\n\nI was thinking to have a GUC timeout parameter like statement_timeout.\nThe backend waits for the setting value when resolving foreign\ntransactions. But this idea seems different. FDW can set its timeout\nvia a transaction timeout API, is that right? But even if FDW can set\nthe timeout using a transaction timeout API, the problem that client\nlibraries for some DBMS don't support interruptible functions still\nremains. The user can set a short time to the timeout but it also\nleads to unnecessary timeouts. Thoughts?\n\n>\n>\n> --------------------------------------------------\n> public interface javax.transaction.xa.XAResource\n>\n> int getTransactionTimeout() throws XAException\n> This method returns the transaction timeout value set for this XAResourceinstance. If XAResource.\n> setTransactionTimeout was not use prior to invoking this method, the return value is the\n> default timeout set for the resource manager; otherwise, the value used in the previous setTransactionTimeoutcall\n> is returned.\n>\n> Throws: XAException\n> An error has occurred. Possible exception values are: XAER_RMERR, XAER_RMFAIL.\n>\n> Returns:\n> The transaction timeout values in seconds.\n>\n> boolean setTransactionTimeout(int seconds) throws XAException\n> This method sets the transaction timeout value for this XAResourceinstance. Once set, this timeout value\n> is effective until setTransactionTimeoutis invoked again with a different value. To reset the timeout\n> value to the default value used by the resource manager, set the value to zero.\n>\n> If the timeout operation is performed successfully, the method returns true; otherwise false. If a resource\n> manager does not support transaction timeout value to be set explicitly, this method returns false.\n>\n> Parameters:\n>\n> seconds\n> An positive integer specifying the timout value in seconds. 
Zero resets the transaction timeout\n> value to the default one used by the resource manager. A negative value results in XAException\n> to be thrown with XAER_INVAL error code.\n>\n> Returns:\n> true if transaction timeout value is set successfully; otherwise false.\n>\n> Throws: XAException\n> An error has occurred. Possible exception values are: XAER_RMERR, XAER_RMFAIL, or\n> XAER_INVAL.\n> --------------------------------------------------\n>\n>\n>\n> > For example in postgres_fdw, it executes a SQL in asynchronous manner\n> > using by PQsendQuery(), PQconsumeInput() and PQgetResult() and so on\n> > (see do_sql_command() and pgfdw_get_result()). Therefore it the user\n> > pressed ctl-C, the remote query would be canceled and raise an ERROR.\n>\n> Yeah, as I replied to Horiguchi-san, postgres_fdw can cancel queries. But postgres_fdw is not ready to cancel connection establishment, is it? At present, the user needs to set connect_timeout parameter on the foreign server to a reasonable short time so that it can respond quickly to cancellation requests. Alternately, we can modify postgres_fdw to use libpq's asynchronous connect functions.\n\nYes, I think using asynchronous connect functions seems a good idea.\n\n> Another issue is that the Postgres manual does not stipulate anything about cancellation of FDW processing. That's why I said that the current FDW does not support cancellation in general. Of course, I think we can stipulate the ability to cancel processing in the FDW interface.\n\nYeah, it's the FDW developer responsibility to write the code to\nexecute the remote SQL that is interruptible. +1 for adding that to\nthe doc.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 12 Oct 2020 16:10:44 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\r\n> I was thinking to have a GUC timeout parameter like statement_timeout.\r\n> The backend waits for the setting value when resolving foreign\r\n> transactions.\r\n\r\nMe too.\r\n\r\n\r\n> But this idea seems different. FDW can set its timeout\r\n> via a transaction timeout API, is that right?\r\n\r\nI'm not perfectly sure about how the TM (application server) works, but probably not. The TM has a configuration parameter for transaction timeout, and the TM calls XAResource.setTransactionTimeout() with that or a smaller value as the argument.\r\n\r\n\r\n> But even if FDW can set\r\n> the timeout using a transaction timeout API, the problem that client\r\n> libraries for some DBMS don't support interruptible functions still\r\n> remains. The user can set a short time to the timeout but it also\r\n> leads to unnecessary timeouts. Thoughts?\r\n\r\nUnfortunately, I'm afraid we can do nothing about it. If the DBMS's client library doesn't support cancellation (e.g. doesn't respond to Ctrl+C or provide a function that cancels processing in progress), then the Postgres user just finds that he can't cancel queries (just like we experienced with odbc_fdw.)\r\n\r\n\r\n Regards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Mon, 12 Oct 2020 08:19:05 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "At Fri, 9 Oct 2020 21:45:57 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in \r\n> On Fri, 9 Oct 2020 at 14:55, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\r\n> >\r\n> > At Fri, 9 Oct 2020 02:33:37 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in\r\n> > > From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\r\n> > > > What about temporary network failures? I think there are users who\r\n> > > > don't want to give up resolving foreign transactions failed due to a\r\n> > > > temporary network failure. Or even they might want to wait for\r\n> > > > transaction completion until they send a cancel request. If we want to\r\n> > > > call the commit routine only once and therefore want FDW to retry\r\n> > > > connecting the foreign server within the call, it means we require all\r\n> > > > FDW implementors to write a retry loop code that is interruptible and\r\n> > > > ensures not to raise an error, which increases difficulty.\r\n> > > >\r\n> > > > Yes, but if we don’t retry to resolve foreign transactions at all on\r\n> > > > an unreliable network environment, the user might end up requiring\r\n> > > > every transaction to check the status of foreign transactions of the\r\n> > > > previous distributed transaction before starts. If we allow to do\r\n> > > > retry, I guess we ease that somewhat.\r\n> > >\r\n> > > OK. As I said, I'm not against trying to cope with temporary network failure. I just don't think it's mandatory. If the network failure is really temporary and thus recovers soon, then the resolver will be able to commit the transaction soon, too.\r\n> >\r\n> > I should missing something, though...\r\n> >\r\n> > I don't understand why we hate ERRORs from fdw-2pc-commit routine so\r\n> > much. 
I think remote-commits should be performed\r\n> > passes the point-of-no-return and the v26-0002 actually places\r\n> > AtEOXact_FdwXact() before the critical section.\r\n> >\r\n> \r\n> So you're thinking the following sequence?\r\n> \r\n> 1. Prepare all foreign transactions.\r\n> 2. Commit the all prepared foreign transactions.\r\n> 3. Commit the local transaction.\r\n> \r\n> Suppose we have the backend process call the commit routine, what if\r\n> one of FDW raises an ERROR during committing the foreign transaction\r\n> after committing other foreign transactions? The transaction will end\r\n> up with an abort but some foreign transactions are already committed.\r\n\r\nOk, I understand what you are aiming at.\r\n\r\nIt is apparently out of the focus of the two-phase commit\r\nprotocol. Each FDW server can try to keep the contract as far as its\r\nability reaches, but in the end that kind of failure is\r\ninevitable. Even if we require FDW developers not to respond until a\r\n2pc-commit succeeds, that just leads the whole FDW-cluster to freeze\r\neven in a case that is not extremely bad.\r\n\r\nWe have no choice other than shutting the server down (then the\r\nsucceeding server start removes the garbage commits) or continuing\r\nworking, leaving some information in system storage (or reverting the\r\ngarbage commits). What we can do in that case is to provide an\r\nautomated way to resolve the inconsistency.\r\n\r\n> Also, what if the backend process failed to commit the local\r\n> transaction? Since it already committed all foreign transactions it\r\n> cannot ensure the global atomicity in this case too. Therefore, I\r\n> think we should commit the distributed transactions in the following\r\n> sequence:\r\n\r\nDitto. It's out of the range of 2pc. Using 2pc for the local transaction\r\ncould reduce that kind of failure but I'm not sure. 3pc, 4pc ...npc\r\ncould reduce the probability but can't eliminate failure cases.\r\n\r\n> 1. 
Prepare all foreign transactions.\r\n> 2. Commit the local transaction.\r\n> 3. Commit the all prepared foreign transactions.\r\n> \r\n> But this is still not a perfect solution. If we have the backend\r\n\r\n2pc is not a perfect solution in the first place. Attaching a similar\r\nphase to it cannot make it \"perfect\".\r\n\r\n> process call the commit routine and an error happens during executing\r\n> the commit routine of an FDW (i.g., at step 3) it's too late to report\r\n> an error to the client because we already committed the local\r\n> transaction. So the current solution is to have a background process\r\n> commit the foreign transactions so that the backend can just wait\r\n> without the possibility of errors.\r\n\r\nWhatever process tries to complete a transaction, the client must wait\r\nfor the transaction to end and anyway that's just a freeze in the\r\nclient's view, unless you intended to respond to local commit before\r\nall participants complete.\r\n\r\nI don't think most client applications would wait for a frozen\r\nserver forever. We have the same issue at the time the client decided\r\nto give up the transaction, or the leader session is killed.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Tue, 13 Oct 2020 10:00:13 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Tue, 13 Oct 2020 at 10:00, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> At Fri, 9 Oct 2020 21:45:57 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\n> > On Fri, 9 Oct 2020 at 14:55, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Fri, 9 Oct 2020 02:33:37 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in\n> > > > From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> > > > > What about temporary network failures? I think there are users who\n> > > > > don't want to give up resolving foreign transactions failed due to a\n> > > > > temporary network failure. Or even they might want to wait for\n> > > > > transaction completion until they send a cancel request. If we want to\n> > > > > call the commit routine only once and therefore want FDW to retry\n> > > > > connecting the foreign server within the call, it means we require all\n> > > > > FDW implementors to write a retry loop code that is interruptible and\n> > > > > ensures not to raise an error, which increases difficulty.\n> > > > >\n> > > > > Yes, but if we don’t retry to resolve foreign transactions at all on\n> > > > > an unreliable network environment, the user might end up requiring\n> > > > > every transaction to check the status of foreign transactions of the\n> > > > > previous distributed transaction before starts. If we allow to do\n> > > > > retry, I guess we ease that somewhat.\n> > > >\n> > > > OK. As I said, I'm not against trying to cope with temporary network failure. I just don't think it's mandatory. If the network failure is really temporary and thus recovers soon, then the resolver will be able to commit the transaction soon, too.\n> > >\n> > > I should missing something, though...\n> > >\n> > > I don't understand why we hate ERRORs from fdw-2pc-commit routine so\n> > > much. 
I think remote-commits should be performed before local commit\n> > > passes the point-of-no-return and the v26-0002 actually places\n> > > AtEOXact_FdwXact() before the critical section.\n> > >\n> >\n> > So you're thinking the following sequence?\n> >\n> > 1. Prepare all foreign transactions.\n> > 2. Commit the all prepared foreign transactions.\n> > 3. Commit the local transaction.\n> >\n> > Suppose we have the backend process call the commit routine, what if\n> > one of FDW raises an ERROR during committing the foreign transaction\n> > after committing other foreign transactions? The transaction will end\n> > up with an abort but some foreign transactions are already committed.\n>\n> Ok, I understand what you are aiming.\n>\n> It is apparently out of the focus of the two-phase commit\n> protocol. Each FDW server can try to keep the contract as far as its\n> ability reaches, but in the end such kind of failure is\n> inevitable. Even if we require FDW developers not to respond until a\n> 2pc-commit succeeds, that just leads the whole FDW-cluster to freeze\n> even not in an extremely bad case.\n>\n> We have no other choices than shutting the server down (then the\n> succeeding server start removes the garbage commits) or continueing\n> working leaving some information in a system storage (or reverting the\n> garbage commits). What we can do in that case is to provide a\n> automated way to resolve the inconsistency.\n>\n> > Also, what if the backend process failed to commit the local\n> > transaction? Since it already committed all foreign transactions it\n> > cannot ensure the global atomicity in this case too. Therefore, I\n> > think we should commit the distributed transactions in the following\n> > sequence:\n>\n> Ditto. It's out of the range of 2pc. Using p2c for local transaction\n> could reduce that kind of failure but I'm not sure. 
3pc, 4pc ...npc\n> could reduce the probability but can't elimite failure cases.\n\nIMO the problems I mentioned arise from the fact that the above\nsequence doesn't really follow the 2pc protocol in the first place.\n\nWe can think of the fact that we commit the local transaction without\npreparation while preparing foreign transactions as that we’re using\nthe 2pc with last resource transaction optimization (or last agent\noptimization)[1]. That is, we prepare all foreign transactions first\nand the local node is always the last resource to process. At this\ntime, the outcome of the distributed transaction completely depends on\nthe fate of the last resource (i.g., the local transaction). If it\nfails, the distributed transaction must be abort by rolling back\nprepared foreign transactions. OTOH, if it succeeds, all prepared\nforeign transaction must be committed. Therefore, we don’t need to\nprepare the last resource and can commit it. In this way, if we want\nto commit the local transaction without preparation, the local\ntransaction must be committed at last. But since the above sequence\ndoesn’t follow this protocol, we will have such problems. I think if\nwe follow the 2pc properly, such basic failures don't happen.\n\n>\n> > 1. Prepare all foreign transactions.\n> > 2. Commit the local transaction.\n> > 3. Commit the all prepared foreign transactions.\n> >\n> > But this is still not a perfect solution. If we have the backend\n>\n> 2pc is not a perfect solution in the first place. Attaching a similar\n> phase to it cannot make it \"perfect\".\n>\n> > process call the commit routine and an error happens during executing\n> > the commit routine of an FDW (i.g., at step 3) it's too late to report\n> > an error to the client because we already committed the local\n> > transaction. 
So the current solution is to have a background process\n> > commit the foreign transactions so that the backend can just wait\n> > without the possibility of errors.\n>\n> Whatever process tries to complete a transaction, the client must wait\n> for the transaction to end and anyway that's just a freeze in the\n> client's view, unless you intended to respond to local commit before\n> all participant complete.\n\nYes, but the point of using a separate process is that even if FDW\ncode raises an error, the client wanting for transaction resolution\ndoesn't get it and it's interruptible.\n\n[1] https://docs.oracle.com/cd/E13222_01/wls/docs91/jta/llr.html\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 13 Oct 2020 11:56:51 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
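The last-resource (last-agent) commit sequence described in the message above can be sketched as follows. This is a minimal illustration with hypothetical names, not the patch's actual code: the local transaction is the only unprepared resource and is committed last, so its outcome decides the whole distributed transaction.

```python
# Hypothetical sketch of 2PC with the "last resource" optimization:
# prepare every foreign transaction, then commit the (unprepared) local
# transaction last; its success or failure decides the global outcome.

class ForeignServer:
    def __init__(self, name):
        self.name = name
        self.state = "active"   # active -> prepared -> committed/aborted

    def prepare(self):
        self.state = "prepared"

    def commit_prepared(self):
        assert self.state == "prepared"
        self.state = "committed"

    def rollback_prepared(self):
        assert self.state == "prepared"
        self.state = "aborted"


def commit_distributed(foreign_servers, commit_local):
    # 1. Prepare all foreign transactions first.
    for s in foreign_servers:
        s.prepare()
    # 2. Commit the local transaction without preparing it (the last
    #    resource). If it fails, every prepared foreign transaction is
    #    rolled back and global atomicity is preserved.
    try:
        commit_local()
    except Exception:
        for s in foreign_servers:
            s.rollback_prepared()
        raise
    # 3. The local commit succeeded, so every prepared foreign
    #    transaction must eventually be committed (retried on failure).
    for s in foreign_servers:
        s.commit_prepared()
```

A serialization failure at step 2 still leaves all foreign transactions merely prepared, so they can be rolled back, which is the point Sawada-san makes later in the thread.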
{
"msg_contents": "At Tue, 13 Oct 2020 11:56:51 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in \r\n> On Tue, 13 Oct 2020 at 10:00, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\r\n> >\r\n> > At Fri, 9 Oct 2020 21:45:57 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\r\n> > > On Fri, 9 Oct 2020 at 14:55, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\r\n> > > >\r\n> > > > At Fri, 9 Oct 2020 02:33:37 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in\r\n> > > > > From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\r\n> > > > > > What about temporary network failures? I think there are users who\r\n> > > > > > don't want to give up resolving foreign transactions failed due to a\r\n> > > > > > temporary network failure. Or even they might want to wait for\r\n> > > > > > transaction completion until they send a cancel request. If we want to\r\n> > > > > > call the commit routine only once and therefore want FDW to retry\r\n> > > > > > connecting the foreign server within the call, it means we require all\r\n> > > > > > FDW implementors to write a retry loop code that is interruptible and\r\n> > > > > > ensures not to raise an error, which increases difficulty.\r\n> > > > > >\r\n> > > > > > Yes, but if we don’t retry to resolve foreign transactions at all on\r\n> > > > > > an unreliable network environment, the user might end up requiring\r\n> > > > > > every transaction to check the status of foreign transactions of the\r\n> > > > > > previous distributed transaction before starts. If we allow to do\r\n> > > > > > retry, I guess we ease that somewhat.\r\n> > > > >\r\n> > > > > OK. As I said, I'm not against trying to cope with temporary network failure. I just don't think it's mandatory. 
If the network failure is really temporary and thus recovers soon, then the resolver will be able to commit the transaction soon, too.\r\n> > > >\r\n> > > > I should missing something, though...\r\n> > > >\r\n> > > > I don't understand why we hate ERRORs from fdw-2pc-commit routine so\r\n> > > > much. I think remote-commits should be performed before local commit\r\n> > > > passes the point-of-no-return and the v26-0002 actually places\r\n> > > > AtEOXact_FdwXact() before the critical section.\r\n> > > >\r\n> > >\r\n> > > So you're thinking the following sequence?\r\n> > >\r\n> > > 1. Prepare all foreign transactions.\r\n> > > 2. Commit the all prepared foreign transactions.\r\n> > > 3. Commit the local transaction.\r\n> > >\r\n> > > Suppose we have the backend process call the commit routine, what if\r\n> > > one of FDW raises an ERROR during committing the foreign transaction\r\n> > > after committing other foreign transactions? The transaction will end\r\n> > > up with an abort but some foreign transactions are already committed.\r\n> >\r\n> > Ok, I understand what you are aiming.\r\n> >\r\n> > It is apparently out of the focus of the two-phase commit\r\n> > protocol. Each FDW server can try to keep the contract as far as its\r\n> > ability reaches, but in the end such kind of failure is\r\n> > inevitable. Even if we require FDW developers not to respond until a\r\n> > 2pc-commit succeeds, that just leads the whole FDW-cluster to freeze\r\n> > even not in an extremely bad case.\r\n> >\r\n> > We have no other choices than shutting the server down (then the\r\n> > succeeding server start removes the garbage commits) or continueing\r\n> > working leaving some information in a system storage (or reverting the\r\n> > garbage commits). What we can do in that case is to provide a\r\n> > automated way to resolve the inconsistency.\r\n> >\r\n> > > Also, what if the backend process failed to commit the local\r\n> > > transaction? 
Since it already committed all foreign transactions it\r\n> > > cannot ensure the global atomicity in this case too. Therefore, I\r\n> > > think we should commit the distributed transactions in the following\r\n> > > sequence:\r\n> >\r\n> > Ditto. It's out of the range of 2pc. Using p2c for local transaction\r\n> > could reduce that kind of failure but I'm not sure. 3pc, 4pc ...npc\r\n> > could reduce the probability but can't elimite failure cases.\r\n> \r\n> IMO the problems I mentioned arise from the fact that the above\r\n> sequence doesn't really follow the 2pc protocol in the first place.\r\n> \r\n> We can think of the fact that we commit the local transaction without\r\n> preparation while preparing foreign transactions as that we’re using\r\n> the 2pc with last resource transaction optimization (or last agent\r\n> optimization)[1]. That is, we prepare all foreign transactions first\r\n> and the local node is always the last resource to process. At this\r\n> time, the outcome of the distributed transaction completely depends on\r\n> the fate of the last resource (i.g., the local transaction). If it\r\n> fails, the distributed transaction must be abort by rolling back\r\n> prepared foreign transactions. OTOH, if it succeeds, all prepared\r\n> foreign transaction must be committed. Therefore, we don’t need to\r\n> prepare the last resource and can commit it. In this way, if we want\r\n\r\nThere are cases of commit-failure of a local transaction caused by\r\ntoo-many notifications or by serialization failure.\r\n\r\n> to commit the local transaction without preparation, the local\r\n> transaction must be committed at last. But since the above sequence\r\n> doesn’t follow this protocol, we will have such problems. I think if\r\n> we follow the 2pc properly, such basic failures don't happen.\r\n\r\nTrue. But I haven't suggested that sequence.\r\n\r\n> > > 1. Prepare all foreign transactions.\r\n> > > 2. Commit the local transaction.\r\n> > > 3. 
Commit the all prepared foreign transactions.\r\n> > >\r\n> > > But this is still not a perfect solution. If we have the backend\r\n> >\r\n> > 2pc is not a perfect solution in the first place. Attaching a similar\r\n> > phase to it cannot make it \"perfect\".\r\n> >\r\n> > > process call the commit routine and an error happens during executing\r\n> > > the commit routine of an FDW (i.g., at step 3) it's too late to report\r\n> > > an error to the client because we already committed the local\r\n> > > transaction. So the current solution is to have a background process\r\n> > > commit the foreign transactions so that the backend can just wait\r\n> > > without the possibility of errors.\r\n> >\r\n> > Whatever process tries to complete a transaction, the client must wait\r\n> > for the transaction to end and anyway that's just a freeze in the\r\n> > client's view, unless you intended to respond to local commit before\r\n> > all participant complete.\r\n> \r\n> Yes, but the point of using a separate process is that even if FDW\r\n> code raises an error, the client wanting for transaction resolution\r\n> doesn't get it and it's interruptible.\r\n> \r\n> [1] https://docs.oracle.com/cd/E13222_01/wls/docs91/jta/llr.html\r\n\r\nI don't get the point. If FDW-commit is called on the same process, an\r\nerror from FDW-commit outright leads to the failure of the current\r\ncommit. Isn't \"the client wanting for transaction resolution\" the\r\nclient of the leader process of the 2pc-commit in the same-process\r\nmodel?\r\n\r\nI should missing something, but postgres_fdw allows query cancelation\r\nat commit time. (But I think it is depends on timing whether the\r\nremote commit is completed or aborted.). 
Perhaps the feature was\r\nintroduced after the project started?\r\n\r\n> commit ae9bfc5d65123aaa0d1cca9988037489760bdeae\r\n> Author: Robert Haas <rhaas@postgresql.org>\r\n> Date: Wed Jun 7 15:14:55 2017 -0400\r\n> \r\n> postgres_fdw: Allow cancellation of transaction control commands.\r\n\r\nI thought we were discussing fdw-errors during the 2pc-commit\r\nphase.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Wed, 14 Oct 2020 10:15:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Wed, 14 Oct 2020 at 10:16, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 13 Oct 2020 11:56:51 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\n> > On Tue, 13 Oct 2020 at 10:00, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Fri, 9 Oct 2020 21:45:57 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\n> > > > On Fri, 9 Oct 2020 at 14:55, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > > > >\n> > > > > At Fri, 9 Oct 2020 02:33:37 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in\n> > > > > > From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> > > > > > > What about temporary network failures? I think there are users who\n> > > > > > > don't want to give up resolving foreign transactions failed due to a\n> > > > > > > temporary network failure. Or even they might want to wait for\n> > > > > > > transaction completion until they send a cancel request. If we want to\n> > > > > > > call the commit routine only once and therefore want FDW to retry\n> > > > > > > connecting the foreign server within the call, it means we require all\n> > > > > > > FDW implementors to write a retry loop code that is interruptible and\n> > > > > > > ensures not to raise an error, which increases difficulty.\n> > > > > > >\n> > > > > > > Yes, but if we don’t retry to resolve foreign transactions at all on\n> > > > > > > an unreliable network environment, the user might end up requiring\n> > > > > > > every transaction to check the status of foreign transactions of the\n> > > > > > > previous distributed transaction before starts. If we allow to do\n> > > > > > > retry, I guess we ease that somewhat.\n> > > > > >\n> > > > > > OK. As I said, I'm not against trying to cope with temporary network failure. I just don't think it's mandatory. 
If the network failure is really temporary and thus recovers soon, then the resolver will be able to commit the transaction soon, too.\n> > > > >\n> > > > > I should missing something, though...\n> > > > >\n> > > > > I don't understand why we hate ERRORs from fdw-2pc-commit routine so\n> > > > > much. I think remote-commits should be performed before local commit\n> > > > > passes the point-of-no-return and the v26-0002 actually places\n> > > > > AtEOXact_FdwXact() before the critical section.\n> > > > >\n> > > >\n> > > > So you're thinking the following sequence?\n> > > >\n> > > > 1. Prepare all foreign transactions.\n> > > > 2. Commit the all prepared foreign transactions.\n> > > > 3. Commit the local transaction.\n> > > >\n> > > > Suppose we have the backend process call the commit routine, what if\n> > > > one of FDW raises an ERROR during committing the foreign transaction\n> > > > after committing other foreign transactions? The transaction will end\n> > > > up with an abort but some foreign transactions are already committed.\n> > >\n> > > Ok, I understand what you are aiming.\n> > >\n> > > It is apparently out of the focus of the two-phase commit\n> > > protocol. Each FDW server can try to keep the contract as far as its\n> > > ability reaches, but in the end such kind of failure is\n> > > inevitable. Even if we require FDW developers not to respond until a\n> > > 2pc-commit succeeds, that just leads the whole FDW-cluster to freeze\n> > > even not in an extremely bad case.\n> > >\n> > > We have no other choices than shutting the server down (then the\n> > > succeeding server start removes the garbage commits) or continueing\n> > > working leaving some information in a system storage (or reverting the\n> > > garbage commits). What we can do in that case is to provide a\n> > > automated way to resolve the inconsistency.\n> > >\n> > > > Also, what if the backend process failed to commit the local\n> > > > transaction? 
Since it already committed all foreign transactions it\n> > > > cannot ensure the global atomicity in this case too. Therefore, I\n> > > > think we should commit the distributed transactions in the following\n> > > > sequence:\n> > >\n> > > Ditto. It's out of the range of 2pc. Using p2c for local transaction\n> > > could reduce that kind of failure but I'm not sure. 3pc, 4pc ...npc\n> > > could reduce the probability but can't elimite failure cases.\n> >\n> > IMO the problems I mentioned arise from the fact that the above\n> > sequence doesn't really follow the 2pc protocol in the first place.\n> >\n> > We can think of the fact that we commit the local transaction without\n> > preparation while preparing foreign transactions as that we’re using\n> > the 2pc with last resource transaction optimization (or last agent\n> > optimization)[1]. That is, we prepare all foreign transactions first\n> > and the local node is always the last resource to process. At this\n> > time, the outcome of the distributed transaction completely depends on\n> > the fate of the last resource (i.g., the local transaction). If it\n> > fails, the distributed transaction must be abort by rolling back\n> > prepared foreign transactions. OTOH, if it succeeds, all prepared\n> > foreign transaction must be committed. Therefore, we don’t need to\n> > prepare the last resource and can commit it. In this way, if we want\n>\n> There are cases of commit-failure of a local transaction caused by\n> too-many notifications or by serialization failure.\n\nYes, even if that happens we are still able to rollback all foreign\ntransactions.\n\n>\n> > to commit the local transaction without preparation, the local\n> > transaction must be committed at last. But since the above sequence\n> > doesn’t follow this protocol, we will have such problems. I think if\n> > we follow the 2pc properly, such basic failures don't happen.\n>\n> True. 
But I haven't suggested that sequence.\n\nOkay, I might have missed your point. Could you elaborate on the idea\nyou mentioned before, \"I think remote-commits should be performed\nbefore local commit passes the point-of-no-return\"?\n\n>\n> > > > 1. Prepare all foreign transactions.\n> > > > 2. Commit the local transaction.\n> > > > 3. Commit the all prepared foreign transactions.\n> > > >\n> > > > But this is still not a perfect solution. If we have the backend\n> > >\n> > > 2pc is not a perfect solution in the first place. Attaching a similar\n> > > phase to it cannot make it \"perfect\".\n> > >\n> > > > process call the commit routine and an error happens during executing\n> > > > the commit routine of an FDW (i.g., at step 3) it's too late to report\n> > > > an error to the client because we already committed the local\n> > > > transaction. So the current solution is to have a background process\n> > > > commit the foreign transactions so that the backend can just wait\n> > > > without the possibility of errors.\n> > >\n> > > Whatever process tries to complete a transaction, the client must wait\n> > > for the transaction to end and anyway that's just a freeze in the\n> > > client's view, unless you intended to respond to local commit before\n> > > all participant complete.\n> >\n> > Yes, but the point of using a separate process is that even if FDW\n> > code raises an error, the client wanting for transaction resolution\n> > doesn't get it and it's interruptible.\n> >\n> > [1] https://docs.oracle.com/cd/E13222_01/wls/docs91/jta/llr.html\n>\n> I don't get the point. If FDW-commit is called on the same process, an\n> error from FDW-commit outright leads to the failure of the current\n> commit. Isn't \"the client wanting for transaction resolution\" the\n> client of the leader process of the 2pc-commit in the same-process\n> model?\n>\n> I should missing something, but postgres_fdw allows query cancelation\n> at commit time. 
(But I think it is depends on timing whether the\n> remote commit is completed or aborted.). Perhaps the feature was\n> introduced after the project started?\n>\n> > commit ae9bfc5d65123aaa0d1cca9988037489760bdeae\n> > Author: Robert Haas <rhaas@postgresql.org>\n> > Date: Wed Jun 7 15:14:55 2017 -0400\n> >\n> > postgres_fdw: Allow cancellation of transaction control commands.\n>\n> I thought that we are discussing on fdw-errors during the 2pc-commit\n> phase.\n>\n\nYes, I'm also discussing fdw-errors during the 2pc-commit phase\nthat happens after committing the local transaction.\n\nEven if FDW-commit raises an error due to the user's cancel request or\nwhatever reason during committing the prepared foreign transactions,\nit's too late. The client will get an error like \"ERROR: canceling\nstatement due to user request\" and would think the transaction is\naborted, but that's not true; the local transaction is already committed.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 14 Oct 2020 12:09:34 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
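The division of labor argued for above (once the local transaction is committed, a prepared foreign transaction can only be retried, never surfaced to the client as an ERROR) can be sketched as follows. The function name and structure are hypothetical, not taken from the patch; it only illustrates why a resolver-style retry loop swallows transient failures instead of raising.

```python
# Hypothetical sketch of a resolver retrying step 3 (committing prepared
# foreign transactions). The outcome is already decided by the local
# commit, so transient failures are retried rather than reported as an
# ERROR to the client.

import time

def resolve_prepared(commit_fn, max_attempts=5, backoff=0.0):
    """Retry committing one prepared foreign transaction.

    Returns the attempt number on success, or None if still unresolved
    after max_attempts (leaving the transaction for a later retry or
    manual resolution), never propagating the transient error.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            commit_fn()
            return attempt
        except ConnectionError:
            time.sleep(backoff)  # temporary network failure: wait, retry
    return None
```

A backend waiting on such a resolver can still be interrupted by a cancel request without the FDW error itself reaching the client, which is the behavior Sawada-san describes.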
{
"msg_contents": "At Wed, 14 Oct 2020 12:09:34 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in \r\n> On Wed, 14 Oct 2020 at 10:16, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\r\n> > There are cases of commit-failure of a local transaction caused by\r\n> > too-many notifications or by serialization failure.\r\n> \r\n> Yes, even if that happens we are still able to rollback all foreign\r\n> transactions.\r\n\r\nMmm. I'm confused. If this is about 2pc-commit-request(or prepare)\r\nphase, we can rollback the remote transactions. But I think we're\r\nfocusing on the 2pc-commit phase. Remote transactions that have already been\r\n2pc-committed can no longer be rolled back.\r\n\r\n> > > to commit the local transaction without preparation, the local\r\n> > > transaction must be committed at last. But since the above sequence\r\n> > > doesn’t follow this protocol, we will have such problems. I think if\r\n> > > we follow the 2pc properly, such basic failures don't happen.\r\n> >\r\n> > True. But I haven't suggested that sequence.\r\n> \r\n> Okay, I might have missed your point. Could you elaborate on the idea\r\n> you mentioned before, \"I think remote-commits should be performed\r\n> before local commit passes the point-of-no-return\"?\r\n\r\nIt is simply the condition that we can ERROR-out from\r\nCommitTransaction. I thought that when you said \"we cannot\r\nERROR-out\" you meant \"since that is raised to FATAL\", but it seems to\r\nme that both of you are looking at another aspect.\r\n\r\nIf the aspect is \"how to complete the all-prepared 2pc transaction\r\nat all costs\", I'd say \"there's a fundamental limitation\". Although\r\nI'm not sure what you mean exactly by prohibiting errors from fdw\r\nroutines, if that meant \"the API can fail, but must not raise an\r\nexception\", that policy is enforced by setting a critical\r\nsection. 
However, if it were \"the API mustn't fail\", that cannot be\r\nrealized, I believe.\r\n\r\n> > I thought that we are discussing on fdw-errors during the 2pc-commit\r\n> > phase.\r\n> >\r\n> \r\n> Yes, I'm also discussing on fdw-errors during the 2pc-commit phase\r\n> that happens after committing the local transaction.\r\n> \r\n> Even if FDW-commit raises an error due to the user's cancel request or\r\n> whatever reason during committing the prepared foreign transactions,\r\n> it's too late. The client will get an error like \"ERROR: canceling\r\n> statement due to user request\" and would think the transaction is\r\n> aborted but it's not true, the local transaction is already committed.\r\n\r\nBy the way, I found that I misread the patch. In v26-0002,\r\nAtEOXact_FdwXact() is actually called after the\r\npoint-of-no-return. What is the reason for that placement? We can\r\nerror-out before changing the state to TRANS_COMMIT.\r\n\r\nAnd if any of the remotes ended with 2pc-commit (not prepare phase)\r\nfailure, consistency of the commit is no longer guaranteed so we have\r\nno choice other than shutting down the server, or continuing running\r\nallowing the inconsistency. What do we want in that case?\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Wed, 14 Oct 2020 13:19:07 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Wed, 14 Oct 2020 at 13:19, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 14 Oct 2020 12:09:34 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\n> > On Wed, 14 Oct 2020 at 10:16, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrot> > There are cases of commit-failure of a local transaction caused by\n> > > too-many notifications or by serialization failure.\n> >\n> > Yes, even if that happens we are still able to rollback all foreign\n> > transactions.\n>\n> Mmm. I'm confused. If this is about 2pc-commit-request(or prepare)\n> phase, we can rollback the remote transactions. But I think we're\n> focusing 2pc-commit phase. remote transaction that has already\n> 2pc-committed, they can be no longer rollback'ed.\n\nDid you mention a failure of local commit, right? With the current\napproach, we prepare all foreign transactions first and then commit\nthe local transaction. After committing the local transaction we\ncommit the prepared foreign transactions. So suppose a serialization\nfailure happens during committing the local transaction, we still are\nable to roll back foreign transactions. The check of serialization\nfailure of the foreign transactions has already been done at the\nprepare phase.\n\n>\n> > > > to commit the local transaction without preparation, the local\n> > > > transaction must be committed at last. But since the above sequence\n> > > > doesn’t follow this protocol, we will have such problems. I think if\n> > > > we follow the 2pc properly, such basic failures don't happen.\n> > >\n> > > True. But I haven't suggested that sequence.\n> >\n> > Okay, I might have missed your point. Could you elaborate on the idea\n> > you mentioned before, \"I think remote-commits should be performed\n> > before local commit passes the point-of-no-return\"?\n>\n> It is simply the condition that we can ERROR-out from\n> CommitTransaction. 
I thought that when you say like \"we cannot\n> ERROR-out\" you meant \"since that is raised to FATAL\", but it seems to\n> me that both of you are looking another aspect.\n>\n> If the aspect is \"what to do complete the all-prepared p2c transaction\n> at all costs\", I'd say \"there's a fundamental limitaion\". Although\n> I'm not sure what you mean exactly by prohibiting errors from fdw\n> routines , if that meant \"the API can fail, but must not raise an\n> exception\", that policy is enforced by setting a critical\n> section. However, if it were \"the API mustn't fail\", that cannot be\n> realized, I believe.\n\nWhen I say \"we cannot error-out\" it means it's too late. What I'd like\nto prevent is that the backend process returns an error to the client\nafter committing the local transaction. Because it will mislead the\nuser.\n\n>\n> > > I thought that we are discussing on fdw-errors during the 2pc-commit\n> > > phase.\n> > >\n> >\n> > Yes, I'm also discussing on fdw-errors during the 2pc-commit phase\n> > that happens after committing the local transaction.\n> >\n> > Even if FDW-commit raises an error due to the user's cancel request or\n> > whatever reason during committing the prepared foreign transactions,\n> > it's too late. The client will get an error like \"ERROR: canceling\n> > statement due to user request\" and would think the transaction is\n> > aborted but it's not true, the local transaction is already committed.\n>\n> By the way I found that I misread the patch. in v26-0002,\n> AtEOXact_FdwXact() is actually called after the\n> point-of-no-return. What is the reason for the place? We can\n> error-out before changing the state to TRANS_COMMIT.\n>\n\nAre you referring to\nv26-0002-Introduce-transaction-manager-for-foreign-transa.patch? If\nso, the patch doesn't implement 2pc. I think we can commit the foreign\ntransaction before changing the state to TRANS_COMMIT but in any case\nit cannot ensure atomic commit. 
It just adds both commit and rollback\ntransaction APIs so that FDW can control transactions by using these\nAPIs, not by XactCallback.\n\n> And if any of the remotes ended with 2pc-commit (not prepare phase)\n> failure, consistency of the commit is no longer guaranteed so we have\n> no choice other than shutting down the server, or continuing running\n> allowing the incosistency. What do we want in that case?\n\nI think it depends on the failure. If 2pc-commit failed due to network\nconnection failure or the server crash, we would need to try again\nlater. We normally expect the prepared transaction is able to be\ncommitted with no issue but in case it could not, I think we can leave\nthe choice for the user: resolve it manually after recovered, give up\netc.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 14 Oct 2020 13:52:49 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "(v26 fails on the current master)\r\n\r\nAt Wed, 14 Oct 2020 13:52:49 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in \r\n> On Wed, 14 Oct 2020 at 13:19, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\r\n> >\r\n> > At Wed, 14 Oct 2020 12:09:34 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\r\n> > > On Wed, 14 Oct 2020 at 10:16, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrot> > There are cases of commit-failure of a local transaction caused by\r\n> > > > too-many notifications or by serialization failure.\r\n> > >\r\n> > > Yes, even if that happens we are still able to rollback all foreign\r\n> > > transactions.\r\n> >\r\n> > Mmm. I'm confused. If this is about 2pc-commit-request(or prepare)\r\n> > phase, we can rollback the remote transactions. But I think we're\r\n> > focusing 2pc-commit phase. remote transaction that has already\r\n> > 2pc-committed, they can be no longer rollback'ed.\r\n> \r\n> Did you mention a failure of local commit, right? With the current\r\n> approach, we prepare all foreign transactions first and then commit\r\n> the local transaction. After committing the local transaction we\r\n> commit the prepared foreign transactions. So suppose a serialization\r\n> failure happens during committing the local transaction, we still are\r\n> able to roll back foreign transactions. The check of serialization\r\n> failure of the foreign transactions has already been done at the\r\n> prepare phase.\r\n\r\nUnderstood.\r\n\r\n> > > > > to commit the local transaction without preparation, the local\r\n> > > > > transaction must be committed at last. But since the above sequence\r\n> > > > > doesn’t follow this protocol, we will have such problems. I think if\r\n> > > > > we follow the 2pc properly, such basic failures don't happen.\r\n> > > >\r\n> > > > True. But I haven't suggested that sequence.\r\n> > >\r\n> > > Okay, I might have missed your point. 
Could you elaborate on the idea\r\n> > > you mentioned before, \"I think remote-commits should be performed\r\n> > > before local commit passes the point-of-no-return\"?\r\n> >\r\n> > It is simply the condition that we can ERROR-out from\r\n> > CommitTransaction. I thought that when you say like \"we cannot\r\n> > ERROR-out\" you meant \"since that is raised to FATAL\", but it seems to\r\n> > me that both of you are looking another aspect.\r\n> >\r\n> > If the aspect is \"what to do complete the all-prepared p2c transaction\r\n> > at all costs\", I'd say \"there's a fundamental limitaion\". Although\r\n> > I'm not sure what you mean exactly by prohibiting errors from fdw\r\n> > routines , if that meant \"the API can fail, but must not raise an\r\n> > exception\", that policy is enforced by setting a critical\r\n> > section. However, if it were \"the API mustn't fail\", that cannot be\r\n> > realized, I believe.\r\n> \r\n> When I say \"we cannot error-out\" it means it's too late. What I'd like\r\n> to prevent is that the backend process returns an error to the client\r\n> after committing the local transaction. Because it will mislead the\r\n> user.\r\n\r\nAnyway we don't do anything that can fail after changing state to\r\nTRANS_COMMIT. So we cannot run fdw-2pc-commit after that since it\r\ncannot be failure-proof. if we do them before the point we cannot\r\nERROR-out after local commit completes.\r\n\r\n> > > > I thought that we are discussing on fdw-errors during the 2pc-commit\r\n> > > > phase.\r\n> > > >\r\n> > >\r\n> > > Yes, I'm also discussing on fdw-errors during the 2pc-commit phase\r\n> > > that happens after committing the local transaction.\r\n> > >\r\n> > > Even if FDW-commit raises an error due to the user's cancel request or\r\n> > > whatever reason during committing the prepared foreign transactions,\r\n> > > it's too late. 
The client will get an error like \"ERROR: canceling\r\n> > > statement due to user request\" and would think the transaction is\r\n> > > aborted but it's not true, the local transaction is already committed.\r\n> >\r\n> > By the way I found that I misread the patch. in v26-0002,\r\n> > AtEOXact_FdwXact() is actually called after the\r\n> > point-of-no-return. What is the reason for the place? We can\r\n> > error-out before changing the state to TRANS_COMMIT.\r\n> >\r\n> \r\n> Are you referring to\r\n> v26-0002-Introduce-transaction-manager-for-foreign-transa.patch? If\r\n> so, the patch doesn't implement 2pc. I think we can commit the foreign\r\n\r\nAh, I guessed that the trigger points of PREPARE and COMMIT that are\r\ninserted by 0002 won't be moved by the following patches. So the\r\ndirection of my discussion doesn't change given that fact.\r\n\r\n> transaction before changing the state to TRANS_COMMIT but in any case\r\n> it cannot ensure atomic commit. It just adds both commit and rollback\r\n\r\nI guess that you have the local-commit-failure case in mind? Couldn't\r\nwe internally prepare the local transaction and then follow the correct\r\n2pc protocol involving the local transaction? (I'm looking at v26-0008)\r\n\r\n> transaction APIs so that FDW can control transactions by using these\r\n> APIs, not by XactCallback.\r\n\r\n> > And if any of the remotes ended with 2pc-commit (not prepare phase)\r\n> > failure, consistency of the commit is no longer guaranteed so we have\r\n> > no choice other than shutting down the server, or continuing running\r\n> > allowing the incosistency. What do we want in that case?\r\n> \r\n> I think it depends on the failure. If 2pc-commit failed due to network\r\n> connection failure or the server crash, we would need to try again\r\n> later. 
We normally expect the prepared transaction is able to be\r\n> committed with no issue but in case it could not, I think we can leave\r\n> the choice for the user: resolve it manually after recovered, give up\r\n> etc.\r\n\r\nUnderstood.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Wed, 14 Oct 2020 17:10:56 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take\n 2"
},
{
"msg_contents": "On Wed, 14 Oct 2020 at 17:11, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> (v26 fails on the current master)\n\nThanks, I'll update the patch.\n\n>\n> At Wed, 14 Oct 2020 13:52:49 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\n> > On Wed, 14 Oct 2020 at 13:19, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Wed, 14 Oct 2020 12:09:34 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\n> > > > On Wed, 14 Oct 2020 at 10:16, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrot> > There are cases of commit-failure of a local transaction caused by\n> > > > > too-many notifications or by serialization failure.\n> > > >\n> > > > Yes, even if that happens we are still able to rollback all foreign\n> > > > transactions.\n> > >\n> > > Mmm. I'm confused. If this is about 2pc-commit-request(or prepare)\n> > > phase, we can rollback the remote transactions. But I think we're\n> > > focusing 2pc-commit phase. remote transaction that has already\n> > > 2pc-committed, they can be no longer rollback'ed.\n> >\n> > Did you mention a failure of local commit, right? With the current\n> > approach, we prepare all foreign transactions first and then commit\n> > the local transaction. After committing the local transaction we\n> > commit the prepared foreign transactions. So suppose a serialization\n> > failure happens during committing the local transaction, we still are\n> > able to roll back foreign transactions. The check of serialization\n> > failure of the foreign transactions has already been done at the\n> > prepare phase.\n>\n> Understood.\n>\n> > > > > > to commit the local transaction without preparation, the local\n> > > > > > transaction must be committed at last. But since the above sequence\n> > > > > > doesn’t follow this protocol, we will have such problems. I think if\n> > > > > > we follow the 2pc properly, such basic failures don't happen.\n> > > > >\n> > > > > True. 
But I haven't suggested that sequence.\n> > > >\n> > > > Okay, I might have missed your point. Could you elaborate on the idea\n> > > > you mentioned before, \"I think remote-commits should be performed\n> > > > before local commit passes the point-of-no-return\"?\n> > >\n> > > It is simply the condition that we can ERROR-out from\n> > > CommitTransaction. I thought that when you say like \"we cannot\n> > > ERROR-out\" you meant \"since that is raised to FATAL\", but it seems to\n> > > me that both of you are looking another aspect.\n> > >\n> > > If the aspect is \"what to do complete the all-prepared p2c transaction\n> > > at all costs\", I'd say \"there's a fundamental limitaion\". Although\n> > > I'm not sure what you mean exactly by prohibiting errors from fdw\n> > > routines , if that meant \"the API can fail, but must not raise an\n> > > exception\", that policy is enforced by setting a critical\n> > > section. However, if it were \"the API mustn't fail\", that cannot be\n> > > realized, I believe.\n> >\n> > When I say \"we cannot error-out\" it means it's too late. What I'd like\n> > to prevent is that the backend process returns an error to the client\n> > after committing the local transaction. Because it will mislead the\n> > user.\n>\n> Anyway we don't do anything that can fail after changing state to\n> TRANS_COMMIT. So we cannot run fdw-2pc-commit after that since it\n> cannot be failure-proof. if we do them before the point we cannot\n> ERROR-out after local commit completes.\n>\n> > > > > I thought that we are discussing on fdw-errors during the 2pc-commit\n> > > > > phase.\n> > > > >\n> > > >\n> > > > Yes, I'm also discussing on fdw-errors during the 2pc-commit phase\n> > > > that happens after committing the local transaction.\n> > > >\n> > > > Even if FDW-commit raises an error due to the user's cancel request or\n> > > > whatever reason during committing the prepared foreign transactions,\n> > > > it's too late. 
The client will get an error like \"ERROR: canceling\n> > > > statement due to user request\" and would think the transaction is\n> > > > aborted but it's not true, the local transaction is already committed.\n> > >\n> > > By the way I found that I misread the patch. in v26-0002,\n> > > AtEOXact_FdwXact() is actually called after the\n> > > point-of-no-return. What is the reason for the place? We can\n> > > error-out before changing the state to TRANS_COMMIT.\n> > >\n> >\n> > Are you referring to\n> > v26-0002-Introduce-transaction-manager-for-foreign-transa.patch? If\n> > so, the patch doesn't implement 2pc. I think we can commit the foreign\n>\n> Ah, I guessed that the trigger points of PREPARE and COMMIT that are\n> inserted by 0002 won't be moved by the following patches. So the\n> direction of my discussion doesn't change given that fact.\n>\n> > transaction before changing the state to TRANS_COMMIT but in any case\n> > it cannot ensure atomic commit. It just adds both commit and rollback\n>\n> I guess that you have the local-commit-failure case in mind? Couldn't\n> we internally prepare the local transaction and then follow the correct\n> 2pc protocol involving the local transaction? (I'm looking at v26-0008)\n\nYes, we could. But as I mentioned before, if we always commit the local\ntransaction last, we don't necessarily need to prepare the local\ntransaction. If we prepared the local transaction, I think we would be\nable to allow FDW's commit routine to raise an error even during\n2pc-commit, but only for the first time. Once we committed any one of\nthe involved transactions including the local transaction and foreign\ntransactions, the commit routine must not raise an error during\n2pc-commit for the same reason; it's too late.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 14 Oct 2020 19:11:49 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Mon, 12 Oct 2020 at 17:19, tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> > I was thinking to have a GUC timeout parameter like statement_timeout.\n> > The backend waits for the setting value when resolving foreign\n> > transactions.\n>\n> Me too.\n>\n>\n> > But this idea seems different. FDW can set its timeout\n> > via a transaction timeout API, is that right?\n>\n> I'm not perfectly sure about how the TM (application server) works, but probably no. The TM has a configuration parameter for transaction timeout, and the TM calls XAResource.setTransactionTimeout() with that or smaller value for the argument.\n>\n>\n> > But even if FDW can set\n> > the timeout using a transaction timeout API, the problem that client\n> > libraries for some DBMS don't support interruptible functions still\n> > remains. The user can set a short time to the timeout but it also\n> > leads to unnecessary timeouts. Thoughts?\n>\n> Unfortunately, I'm afraid we can do nothing about it. If the DBMS's client library doesn't support cancellation (e.g. doesn't respond to Ctrl+C or provide a function that cancel processing in pgorogss), then the Postgres user just finds that he can't cancel queries (just like we experienced with odbc_fdw.)\n\nSo the idea of using another process to commit prepared foreign\ntransactions seems better also in terms of this point. Even if a DBMS\nclient library doesn’t support query cancellation, the transaction\ncommit can return control to the client when the user presses Ctrl-C,\nas the backend process is just sleeping using WaitLatch() (it’s\nsimilar to synchronous replication).\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 19 Oct 2020 14:17:42 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\r\n> > Unfortunately, I'm afraid we can do nothing about it. If the DBMS's client\r\n> library doesn't support cancellation (e.g. doesn't respond to Ctrl+C or provide a\r\n> function that cancel processing in pgorogss), then the Postgres user just finds\r\n> that he can't cancel queries (just like we experienced with odbc_fdw.)\r\n> \r\n> So the idea of using another process to commit prepared foreign\r\n> transactions seems better also in terms of this point. Even if a DBMS\r\n> client library doesn’t support query cancellation, the transaction\r\n> commit can return control to the client when the user presses Ctrl-C,\r\n> as the backend process is just sleeping using WaitLatch() (it’s\r\n> similar to synchronous replication).\r\n\r\nI have to say that's nitpicking. I believe almost nobody does, or cares about, canceling commits, at the expense of impractical performance due to non-parallelism, serial execution in each resolver, and context switches.\r\n\r\nAlso, FDW is not cancellable in general. It makes no sense to care only about commit.\r\n\r\n(Fortunately, postgres_fdw is cancellable in any way.)\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n\r\n",
"msg_date": "Mon, 19 Oct 2020 05:38:56 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Mon, 19 Oct 2020 at 14:39, tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> > > Unfortunately, I'm afraid we can do nothing about it. If the DBMS's client\n> > library doesn't support cancellation (e.g. doesn't respond to Ctrl+C or provide a\n> > function that cancel processing in pgorogss), then the Postgres user just finds\n> > that he can't cancel queries (just like we experienced with odbc_fdw.)\n> >\n> > So the idea of using another process to commit prepared foreign\n> > transactions seems better also in terms of this point. Even if a DBMS\n> > client library doesn’t support query cancellation, the transaction\n> > commit can return control to the client when the user presses Ctrl-C,\n> > as the backend process is just sleeping using WaitLatch() (it’s\n> > similar to synchronous replication).\n>\n> I have to say that's nitpicking. I believe almost nobody does, or cares about, canceling commits,\n\nReally? I don’t think so. I think it’s terrible that the query gets\nstuck for a long time and we cannot do anything other than wait until a\ncrashed foreign server is restored. We can have a timeout but I don’t\nthink every user wants to use the timeout or the user might want to\nset the timeout to a relatively large value out of concern about\nmisdetection. I guess synchronous replication had similar concerns so\nit has a similar mechanism.\n\n> at the expense of impractical performance due to non-parallelism, serial execution in each resolver, and context switches.\n\nI have never said that we’re going to live with serial execution in\neach resolver and non-parallelism. 
I've been repeatedly saying that it\nwould be possible to improve this feature over future releases to\nget good performance even if we use a separate background process.\nUsing a background process to commit is the only option to support\ninterruptible foreign transaction resolution for now, whereas there are\nsome ideas for performance improvements. I think we haven't had enough\ndiscussion on how we can improve the idea of using a separate process\nand how much performance will improve and how possible it is. It's not\ntoo late to reject that idea after the discussion.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 19 Oct 2020 16:22:41 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\r\n> On Mon, 19 Oct 2020 at 14:39, tsunakawa.takay@fujitsu.com\r\n> <tsunakawa.takay@fujitsu.com> wrote:\r\n> > I have to say that's nitpicking. I believe almost nobody does, or cares about,\r\n> canceling commits,\r\n> \r\n> Really? I don’t think so. I think it’s terrible that the query gets\r\n> stuck for a long time and we cannot do anything other than wait until a\r\n> crashed foreign server is restored. We can have a timeout but I don’t\r\n> think every user wants to use the timeout or the user might want to\r\n> set the timeout to a relatively large value out of concern about\r\n> misdetection. I guess synchronous replication had similar concerns so\r\n> it has a similar mechanism.\r\n\r\nReally. I thought we were talking about canceling commits with Ctrl + C, as you mentioned, right? I couldn't imagine, in production environments where many sessions are running transactions concurrently, how the user (DBA) would want to, and could, cancel each stuck session during commit one by one with Ctrl + C by hand. I haven't seen a feature that enables the user (administrator) to cancel running processing with Ctrl + C from the side exist, or be considered crucial.\r\n\r\nRather, setting an appropriate timeout is the current sound system design, isn't it? It spans many areas - TCP/IP, heartbeats of load balancers and clustering software, request and response to application servers and database servers, etc. I sympathize with your concern that users may not be confident about their settings. But that's the current practice, unfortunately.\r\n\r\n\r\n> > at the expense of impractical performance due to non-parallelism, serial\r\n> execution in each resolver, and context switches.\r\n> \r\n> I have never said that we’re going to live with serial execution in\r\n> each resolver and non-parallelism. 
I've been repeatedly saying that it\r\n> would be possible to improve this feature over future releases to\r\n> get good performance even if we use a separate background process.\r\n\r\nIIRC, I haven't seen a reasonable design based on a separate process that handles commits during normal operation. What I heard is to launch as many resolvers as client sessions, but that consumes too many resources, as I said.\r\n\r\n\r\n> Using a background process to commit is the only option to support\r\n> interruptible foreign transaction resolution for now, whereas there are\r\n> some ideas for performance improvements.\r\n\r\nA practical solution is the timeout for the FDW in general, as in application servers. postgres_fdw can benefit from Ctrl + C as well.\r\n\r\n\r\n> I think we haven't had enough\r\n> discussion on how we can improve the idea of using a separate process\r\n> and how much performance will improve and how possible it is. It's not\r\n> too late to reject that idea after the discussion.\r\n\r\nYeah, I agree that discussion is not enough yet. In other words, the design has not reached the quality for the first release yet. We should try to avoid using \"Hopefully, we should be able to improve in the next release (I haven't seen the design in light, though)\" as an excuse for getting a half-baked patch committed that does not offer practical quality. I have seen many developers' patches rejected because of insufficient performance, e.g. even 0.8% performance impact. (I'm one of those developers, actually...) I have been feeling this community is rigorous about performance. We have to be sincere.\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Mon, 19 Oct 2020 09:07:17 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Mon, Oct 19, 2020 at 2:37 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> Really. I thought we were talking about canceling commits with Ctrl + C, as you mentioned, right? I couldn't imagine, in production environments where many sessions are running transactions concurrently, how the user (DBA) would want to, and could, cancel each stuck session during commit one by one with Ctrl + C by hand. I haven't seen a feature that enables the user (administrator) to cancel running processing with Ctrl + C from the side exist, or be considered crucial.\n\nUsing pg_cancel_backend() and pg_terminate_backend() a DBA can cancel a\nrunning query in any backend or terminate a backend. For either to\nwork the backend needs to be interruptible. IIRC, Robert had made an\neffort to make postgres_fdw interruptible a few years back.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 19 Oct 2020 17:07:24 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Mon, 19 Oct 2020 at 20:37, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Mon, Oct 19, 2020 at 2:37 PM tsunakawa.takay@fujitsu.com\n> <tsunakawa.takay@fujitsu.com> wrote:\n> >\n> > Really. I thought we were talking about canceling commits with Ctrl + C, as you mentioned, right? I couldn't imagine, in production environments where many sessions are running transactions concurrently, how the user (DBA) would want to, and could, cancel each stuck session during commit one by one with Ctrl + C by hand. I haven't seen a feature that enables the user (administrator) to cancel running processing with Ctrl + C from the side exist, or be considered crucial.\n>\n> Using pg_cancel_backend() and pg_terminate_backend() a DBA can cancel a\n> running query in any backend or terminate a backend. For either to\n> work the backend needs to be interruptible. IIRC, Robert had made an\n> effort to make postgres_fdw interruptible a few years back.\n\nRight. Also, we discussed having a timeout on the core side but I'm\nconcerned that the timeout also might not work if it's not\ninterruptible.\n\nWhile using a timeout is a good idea, I have to think there is also\na certain number of users who don't use this timeout, as there is\na certain number of users who don't use timeouts such as\nstatement_timeout. We must not ignore such users, and it might not be\nadvisable to design a feature that ignores such users.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 19 Oct 2020 21:20:09 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "From: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\r\n> Using pg_cancel_backend() and pg_terminate_backend() a DBA can cancel a\r\n> running query in any backend or terminate a backend. For either to\r\n> work the backend needs to be interruptible. IIRC, Robert had made an\r\n> effort to make postgres_fdw interruptible a few years back.\r\n\r\nYeah, I know those functions. Sawada-san was talking about Ctrl + C, so I responded accordingly.\r\n\r\nAlso, how can the DBA find sessions to run those functions against? Can he tell if a session is connected to or running SQL to a given foreign server? Can he terminate or cancel all sessions with one SQL command that are stuck accessing a particular foreign server?\r\n\r\nFurthermore, FDW is not cancellable in general. So, I don't see a point in trying hard to make only commit be cancelable.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Tue, 20 Oct 2020 02:44:09 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "At Tue, 20 Oct 2020 02:44:09 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in \n> From: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\n> > Using pg_cancel_backend() and pg_terminate_backend() a DBA can cancel a\n> > running query in any backend or terminate a backend. For either to\n> > work the backend needs to be interruptible. IIRC, Robert had made an\n> > effort to make postgres_fdw interruptible a few years back.\n> \n> Yeah, I know those functions. Sawada-san was talking about Ctrl + C, so I responded accordingly.\n> \n> Also, how can the DBA find sessions to run those functions against? Can he tell if a session is connected to or running SQL to a given foreign server? Can he terminate or cancel all sessions with one SQL command that are stuck accessing a particular foreign server?\n\nI don't think the inability to cancel all sessions at once can be a\nreason not to allow operators to cancel a stuck session.\n\n> Furthermore, FDW is not cancellable in general. So, I don't see a point in trying hard to make only commit be cancelable.\n\nI think that it is quite important that operators can cancel any\nprocess that has been stuck for a long time. Furthermore, postgres_fdw\nis more likely to get stuck since the network is involved, so the usefulness\nof that feature would be higher.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 20 Oct 2020 12:07:23 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take\n 2"
},
{
"msg_contents": "From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> I don't think the inability to cancel all sessions at once can be a\n> reason not to allow operators to cancel a stuck session.\n\nYeah, I didn't mean to discount the ability to cancel queries. I just want to confirm how the user can use the cancellation in practice. I didn't see how the user can use the cancellation in the FDW framework, so I asked about it. We have to think about the user's context if we regard canceling commits as important.\n\n\n> > Furthermore, FDW is not cancellable in general. So, I don't see a point in\n> trying hard to make only commit be cancelable.\n> \n> I think that it is quite important that operators can cancel any\n> process that has been stuck for a long time. Furthermore, postgres_fdw\n> is more likely to get stuck since the network is involved, so the usefulness\n> of that feature would be higher.\n\nBut that is less important than practical performance during normal operation.\n\nBTW, speaking of network, how can postgres_fdw respond quickly to a cancel request when libpq is waiting for a reply from a down foreign server? Can the user continue to use that session after cancellation?\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Tue, 20 Oct 2020 04:23:12 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Tue, 20 Oct 2020 at 13:23, tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > I don't think the inability to cancel all session at once cannot be a\n> > reason not to not to allow operators to cancel a stuck session.\n>\n> Yeah, I didn't mean to discount the ability to cancel queries. I just want to confirm how the user can use the cancellation in practice. I didn't see how the user can use the cancellation in the FDW framework, so I asked about it. We have to think about the user's context if we regard canceling commits as important.\n>\n\nI think it doesn't matter whether it is in the FDW framework or not. The user\nnormally doesn't care which backend processes are connecting to foreign\nservers. They will attempt to cancel the query as always if they\nrealize that a backend gets stuck. There are surely plenty of users\nwho use query cancellation.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 20 Oct 2020 15:53:29 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "At Tue, 20 Oct 2020 15:53:29 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in \n> On Tue, 20 Oct 2020 at 13:23, tsunakawa.takay@fujitsu.com\n> <tsunakawa.takay@fujitsu.com> wrote:\n> >\n> > From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > > I don't think the inability to cancel all session at once cannot be a\n> > > reason not to not to allow operators to cancel a stuck session.\n> >\n> > Yeah, I didn't mean to discount the ability to cancel queries. I just want to confirm how the user can use the cancellation in practice. I didn't see how the user can use the cancellation in the FDW framework, so I asked about it. We have to think about the user's context if we regard canceling commits as important.\n> >\n> \n> I think it doesn't matter whether in FDW framework or not. The user\n> normally doesn't care which backend processes connecting to foreign\n> servers. They will attempt to cancel the query like always if they\n> realized that a backend gets stuck. There are surely plenty of users\n> who use query cancellation.\n\nThe most serious impact of the inability to cancel a query on a\ncertain session is that a server restart is required to end such a\nsession.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 20 Oct 2020 16:16:26 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take\n 2"
},
{
"msg_contents": "At Tue, 20 Oct 2020 04:23:12 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in \n> From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > > Furthermore, FDW is not cancellable in general. So, I don't see a point in\n> > trying hard to make only commit be cancelable.\n> > \n> > I think that it is quite important that operators can cancel any\n> > process that has been stuck for a long time. Furthermore, postgres_fdw\n> > is more likely to be stuck since network is involved so the usefulness\n> > of that feature would be higher.\n> \n> But lower than practical performance during normal operation.\n> \n> BTW, speaking of network, how can postgres_fdw respond quickly to cancel request when libpq is waiting for a reply from a down foreign server? Can the user continue to use that session after cancellation?\n\nIt seems to respond to a statement-cancel signal immediately while\nwaiting for an incoming byte. However, it seems to wait forever while\nwaiting for space in the send buffer. (Does that mean the session will be\nstuck if it sends a large chunk of bytes while the network is down?)\n\nAfter receiving a signal, it closes the problematic connection. So the\nlocal session is usable after that, but the failed remote sessions are\nclosed and new ones are created at the next use.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 20 Oct 2020 16:29:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take\n 2"
},
{
"msg_contents": "From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> At Tue, 20 Oct 2020 15:53:29 +0900, Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote in\n> > I think it doesn't matter whether in FDW framework or not. The user\n> > normally doesn't care which backend processes connecting to foreign\n> > servers. They will attempt to cancel the query like always if they\n> > realized that a backend gets stuck. There are surely plenty of users\n> > who use query cancellation.\n> \n> The most serious impact from inability of canceling a query on a\n> certain session is that server-restart is required to end such a\n> session.\n\nOK, as I may be repeating, I didn't deny the need for cancellation. Let's organize the argument.\n\n* FDW in general\nMy understanding is that the FDW feature does not stipulate anything about cancellation. In fact, odbc_fdw was uncancelable. What do we do about this?\n\n* postgres_fdw\nFortunately, it is (should be?) cancelable whatever method we choose for 2PC. So no problem.\nBut is it really cancellable now? What if the libpq call is waiting for a response when the foreign server or network is down?\n\n\"Inability to cancel requires database server restart\" feels a bit exaggerated, as libpq has tcp_keepalive* and tcp_user_timeout connection parameters, and even without setting them, the TCP timeout works.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Tue, 20 Oct 2020 07:53:58 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> It seems to respond to a statement-cancel signal immediately while\n> waiting for a coming byte. However, seems to wait forever while\n> waiting a space in send-buffer. (Is that mean the session will be\n> stuck if it sends a large chunk of bytes while the network is down?)\n\nWhat part makes you worried about that? libpq's send processing?\n\nI've just examined pgfdw_cancel_query(), too. As below, it uses a hidden 30-second timeout. After all, postgres_fdw also relies on a timeout already.\n\n /*\n * If it takes too long to cancel the query and discard the result, assume\n * the connection is dead.\n */\n endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);\n\n\n> After receiving a signal, it closes the problem connection. So the\n> local session is usable after that but the fiailed remote sessions are\n> closed and created another one at the next use.\n\nI couldn't see that the problematic connection is closed when the cancellation fails... Am I looking at the wrong place?\n\n /*\n * If connection is already unsalvageable, don't touch it\n * further.\n */\n if (entry->changing_xact_state)\n break;\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Tue, 20 Oct 2020 08:55:59 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Tue, 20 Oct 2020 at 16:54, tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > At Tue, 20 Oct 2020 15:53:29 +0900, Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote in\n> > > I think it doesn't matter whether in FDW framework or not. The user\n> > > normally doesn't care which backend processes connecting to foreign\n> > > servers. They will attempt to cancel the query like always if they\n> > > realized that a backend gets stuck. There are surely plenty of users\n> > > who use query cancellation.\n> >\n> > The most serious impact from inability of canceling a query on a\n> > certain session is that server-restart is required to end such a\n> > session.\n>\n> OK, as I may be repeating, I didn't deny the need for cancellation.\n\nSo what's your opinion?\n\n> Let''s organize the argument.\n>\n> * FDW in general\n> My understanding is that the FDW feature does not stipulate anything about cancellation. In fact, odbc_fdw was uncancelable. What do we do about this?\n>\n> * postgres_fdw\n> Fortunately, it is (should be?) cancelable whatever method we choose for 2PC. So no problem.\n> But is it really cancellable now? What if the libpq call is waiting for response when the foreign server or network is down?\n\nI don’t think we need to stipulate query cancellation. Anyway, I\nguess neither the fact that we don’t stipulate anything about query\ncancellation now nor the fact that postgres_fdw might not be cancellable in\nsome situations now is a reason for not supporting query\ncancellation. If it's a desirable behavior and users want it, we need\nto put in an effort to support it as much as possible, like we’ve done in\npostgres_fdw. 
Some FDWs unfortunately might not be able to support it\nonly by their functionality but it would be good if we can achieve\nthat by combination of PostgreSQL and FDW plugins.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 20 Oct 2020 21:11:16 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Tue, 20 Oct 2020 at 17:56, tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > It seems to respond to a statement-cancel signal immediately while\n> > waiting for a coming byte. However, seems to wait forever while\n> > waiting a space in send-buffer. (Is that mean the session will be\n> > stuck if it sends a large chunk of bytes while the network is down?)\n>\n> What part makes you worried about that? libpq's send processing?\n>\n> I've just examined pgfdw_cancel_query(), too. As below, it uses a hidden 30 second timeout. After all, postgres_fdw also relies on timeout already.\n\nIt uses the timeout, but it's also cancellable before the timeout: see\nhow we call CHECK_FOR_INTERRUPTS() in pgfdw_get_cleanup_result().\n\n>\n>\n> > After receiving a signal, it closes the problem connection. So the\n> > local session is usable after that but the fiailed remote sessions are\n> > closed and created another one at the next use.\n>\n> I couldn't see that the problematic connection is closed when the cancellation fails... Am I looking at a wrong place?\n>\n> /*\n> * If connection is already unsalvageable, don't touch it\n> * further.\n> */\n> if (entry->changing_xact_state)\n> break;\n>\n\nI guess Horiguchi-san referred to the following code in pgfdw_xact_callback():\n\n /*\n * If the connection isn't in a good idle state, discard it to\n * recover. Next GetConnection will open a new connection.\n */\n if (PQstatus(entry->conn) != CONNECTION_OK ||\n PQtransactionStatus(entry->conn) != PQTRANS_IDLE ||\n entry->changing_xact_state)\n {\n elog(DEBUG3, \"discarding connection %p\", entry->conn);\n disconnect_pg_server(entry);\n }\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 20 Oct 2020 21:22:31 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
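The behavior described in the exchange above — pgfdw_cancel_query()'s hidden 30-second deadline combined with the CHECK_FOR_INTERRUPTS() call in pgfdw_get_cleanup_result() — amounts to a wait loop with both a timeout and a cancellation check. Below is a minimal standalone sketch of that shape, not the actual postgres_fdw code; `cancel_requested`, `done`, and `wait_with_deadline` are hypothetical stand-ins for PostgreSQL's interrupt flag, the libpq completion test, and the latch/socket wait.

```c
#define _POSIX_C_SOURCE 200809L
#include <stdbool.h>
#include <time.h>

/* Stand-in for PostgreSQL's interrupt flag, set by a signal handler. */
static volatile bool cancel_requested = false;

static long now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000L + ts.tv_nsec / 1000000L;
}

/*
 * Wait for work to finish, giving up after timeout_ms but honoring a
 * cancel request between polls -- the shape of pgfdw_get_cleanup_result().
 * Returns 0 on completion, 1 on timeout, 2 on cancellation.
 */
int wait_with_deadline(bool (*done)(void), long timeout_ms)
{
    long deadline = now_ms() + timeout_ms;

    for (;;)
    {
        if (cancel_requested)            /* CHECK_FOR_INTERRUPTS() analogue */
            return 2;
        if (done())                      /* result has arrived */
            return 0;
        if (now_ms() >= deadline)        /* the hidden hard deadline */
            return 1;
        /* In the real code this is a latch/socket wait, not a short nap. */
        struct timespec nap = {0, 1000000};  /* 1 ms */
        nanosleep(&nap, NULL);
    }
}
```

In the real code the nap is a socket or latch wait, and cancellation raises an error via CHECK_FOR_INTERRUPTS() rather than returning a code, but the control flow — check interrupt, check result, check deadline — is the same.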
{
"msg_contents": "At Tue, 20 Oct 2020 21:22:31 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in \n> On Tue, 20 Oct 2020 at 17:56, tsunakawa.takay@fujitsu.com\n> <tsunakawa.takay@fujitsu.com> wrote:\n> >\n> > From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > > It seems to respond to a statement-cancel signal immediately while\n> > > waiting for a coming byte. However, seems to wait forever while\n> > > waiting a space in send-buffer. (Is that mean the session will be\n> > > stuck if it sends a large chunk of bytes while the network is down?)\n> >\n> > What part makes you worried about that? libpq's send processing?\n> >\n> > I've just examined pgfdw_cancel_query(), too. As below, it uses a hidden 30 second timeout. After all, postgres_fdw also relies on timeout already.\n> \n> It uses the timeout but it's also cancellable before the timeout. See\n> we call CHECK_FOR_INTERRUPTS() in pgfdw_get_cleanup_result().\n\nYes. And as Sawada-san mentioned it's not a matter if a specific FDW\nmodule accepts cancellation or not. It's sufficient that we have one\nexample. Other FDWs will follow postgres_fdw if needed.\n\n> > > After receiving a signal, it closes the problem connection. So the\n> > > local session is usable after that but the fiailed remote sessions are\n> > > closed and created another one at the next use.\n> >\n> > I couldn't see that the problematic connection is closed when the cancellation fails... Am I looking at a wrong place?\n...\n> \n> I guess Horiguchi-san refereed the following code in pgfdw_xact_callback():\n> \n> /*\n> * If the connection isn't in a good idle state, discard it to\n> * recover. Next GetConnection will open a new connection.\n> */\n> if (PQstatus(entry->conn) != CONNECTION_OK ||\n> PQtransactionStatus(entry->conn) != PQTRANS_IDLE ||\n> entry->changing_xact_state)\n> {\n> elog(DEBUG3, \"discarding connection %p\", entry->conn);\n> disconnect_pg_server(entry);\n> }\n\nRight. 
Although it's not directly relevant to this discussion,\nprecisely, that part is not visited just after the remote \"COMMIT\nTRANSACTION\" failed. If that commit fails or is canceled, an exception\nis raised while entry->changing_xact_state = true. Then the function\nis called again within AbortCurrentTransaction() and reaches the above\ncode.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 21 Oct 2020 15:57:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take\n 2"
},
{
"msg_contents": "From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > if (PQstatus(entry->conn) != CONNECTION_OK ||\n> > PQtransactionStatus(entry->conn) != PQTRANS_IDLE ||\n> > entry->changing_xact_state)\n> > {\n> > elog(DEBUG3, \"discarding connection %p\", entry->conn);\n> > disconnect_pg_server(entry);\n> > }\n> \n> Right. Although it's not directly relevant to this discussion,\n> precisely, that part is not visited just after the remote \"COMMIT\n> TRANSACTION\" failed. If that commit fails or is canceled, an exception\n> is raised while entry->changing_xact_state = true. Then the function\n> is called again within AbortCurrentTransaction() and reaches the above\n> code.\n\nAh, then the connection to the foreign server is closed after failing to cancel the query. Thanks.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Wed, 21 Oct 2020 08:21:58 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\r\n> So what's your opinion?\r\n\r\nMy opinion is simple and has not changed. Let's clarify and refine the design first in the following areas (others may have pointed out something else too, but I don't remember), before going deeper into the code review.\r\n\r\n* FDW interface\r\nNew functions so that other FDWs can really implement them. Currently, XA seems to be the only model we can rely on to validate the FDW interface.\r\nWhat FDW function would call what XA function(s)? What should be the arguments for the FDW functions?\r\n\r\n* Performance\r\nParallel prepare and commits on the client backend. The current implementation is intolerable and is not of first-release quality. I proposed the idea.\r\n(If you insist you don't want to do anything about this, I have to think you're just rushing for the patch commit. I want to keep Postgres's reputation.)\r\nAs part of this, I'd like to see the 2PC's message flow and disk writes (via email and/or on the following wiki.) That helps evaluate the 2PC performance, because it's hard to figure it out in the code of a large patch set. I'm simply imagining what is typically written in database textbooks and research papers. I'm asking this because I saw some discussion in this thread that some new WAL records are added. 
I was worried that transactions have to write WAL records other than prepare and commit, unlike textbook implementations.\r\n\r\nAtomic Commit of Distributed Transactions\r\nhttps://wiki.postgresql.org/wiki/Atomic_Commit_of_Distributed_Transactions\r\n\r\n* Query cancellation\r\nAs you showed, there's no problem with postgres_fdw?\r\nThe cancelability of FDW in general remains a problem, but that can be a separate undertaking.\r\n\r\n* Global visibility\r\nThis is what Amit-san suggested several times -- \"design it before reviewing the current patch.\" I'm a bit optimistic about this and think this FDW 2PC can be implemented separately as a pure enhancement of FDW. But I also understand his concern. If your (our?) aim is to use this FDW 2PC for sharding, we may have to design the combination of 2PC and visibility first.\r\n\r\n\r\n\r\n> I don’t think we need to stipulate the query cancellation. Anyway I\r\n> guess the facts neither that we don’t stipulate anything about query\r\n> cancellation now nor that postgres_fdw might not be cancellable in\r\n> some situations now are not a reason for not supporting query\r\n> cancellation. If it's a desirable behavior and users want it, we need\r\n> to put an effort to support it as much as possible like we’ve done in\r\n> postgres_fdw. Some FDWs unfortunately might not be able to support it\r\n> only by their functionality but it would be good if we can achieve\r\n> that by combination of PostgreSQL and FDW plugins.\r\n\r\nLet me comment on this a bit; this is a somewhat dangerous idea, I'm afraid. We need to pay attention to the FDW interface and its documentation so that FDW developers can implement what we consider important -- query cancellation in your discussion. \"postgres_fdw is OK, so the interface is good\" can create interfaces that other FDW developers can't use. That's what Tomas Vondra pointed out several years ago.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Wed, 21 Oct 2020 09:33:31 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
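The parallel prepare/commit idea raised in the message above can be illustrated with a toy polling model in the broadcast-then-poll (async libpq) style: issue PREPARE to every foreign server up front and poll them all each round, so elapsed time tracks the slowest server rather than the sum of all servers. Everything here is an illustrative sketch under invented assumptions (the `server` struct and tick counts are not the patch's design):

```c
#include <stdbool.h>

/* Hypothetical model: each foreign server needs a certain number of
 * polling rounds before its PREPARE TRANSACTION completes. */
struct server
{
    int  rounds_left;   /* rounds until this server's prepare finishes */
    bool prepared;
};

/* Broadcast PREPARE to every server up front, then poll them all each
 * tick: elapsed ticks equal the slowest server's latency. */
int prepare_parallel_ticks(struct server s[], int n)
{
    int  ticks = 0;
    bool all_done = false;

    while (!all_done)
    {
        ticks++;
        all_done = true;
        for (int i = 0; i < n; i++)
        {
            if (!s[i].prepared && --s[i].rounds_left == 0)
                s[i].prepared = true;
            if (!s[i].prepared)
                all_done = false;
        }
    }
    return ticks;
}

/* Wait for each server to finish before starting the next (the current
 * serial behavior): elapsed ticks equal the sum over all servers. */
int prepare_serial_ticks(struct server s[], int n)
{
    int ticks = 0;

    for (int i = 0; i < n; i++)
        while (!s[i].prepared)
        {
            ticks++;
            if (--s[i].rounds_left == 0)
                s[i].prepared = true;
        }
    return ticks;
}
```

With per-server latencies of 3, 5, and 2 rounds, the broadcast version finishes in 5 rounds while the one-server-at-a-time version needs 10 — that latency gap is what the parallel-prepare proposal is about.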
{
"msg_contents": "On Wed, Oct 21, 2020 at 3:03 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> > So what's your opinion?\n>\n> * Global visibility\n> This is what Amit-san suggested some times -- \"design it before reviewing the current patch.\" I'm a bit optimistic about this and think this FDW 2PC can be implemented separately as a pure enhancement of FDW. But I also understand his concern. If your (our?) aim is to use this FDW 2PC for sharding,\n>\n\nAs far as I understand, that is the goal for which this is a step. For example, see the wiki [1]. I understand that the wiki is not the\nfinal thing, but I have seen other places as well where there is a\nmention of FDW-based sharding, and I feel this is the reason why many\npeople are trying to improve this area. That is why I suggested having\nan upfront design of global visibility and a deadlock detector along\nwith this work.\n\n\n[1] - https://wiki.postgresql.org/wiki/WIP_PostgreSQL_Sharding\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 21 Oct 2020 15:38:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Wed, 21 Oct 2020 at 18:33, tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> > So what's your opinion?\n>\n> My opinion is simple and has not changed. Let's clarify and refine the design first in the following areas (others may have pointed out something else too, but I don't remember), before going deeper into the code review.\n>\n> * FDW interface\n> New functions so that other FDWs can really implement. Currently, XA seems to be the only model we can rely on to validate the FDW interface.\n> What FDW function would call what XA function(s)? What should be the arguments for the FEW functions?\n\nI guess that since FDW interfaces may be affected by the feature\narchitecture, we can discuss them later.\n\n> * Performance\n> Parallel prepare and commits on the client backend. The current implementation is untolerable and should not be the first release quality. I proposed the idea.\n> (If you insist you don't want to anything about this, I have to think you're just rushing for the patch commit. I want to keep Postgres's reputation.)\n\nWhat do you have in mind regarding the implementation of parallel prepare\nand commit? Given that some FDW plugins don't support asynchronous\nexecution, I guess we need to use parallel workers or something. That\nis, the backend process launches parallel workers to\nprepare/commit/rollback foreign transactions in parallel. I don't deny\nthis approach but it'll definitely make the feature complex and needs\nmore code.\n\nMy point is to start small and keep the first version simple. Even\nif we need one or more years for this feature, I think that\nintroducing the simple and minimum functionality as the first version\nto the core still has benefits. 
In this sense, the patch\nhaving the backend return without waits for resolution after the local\ncommit would be a good start as the first version (i.g., up to\napplying v26-0006 patch). Anyway, the architecture should be\nextensible enough for future improvements.\n\nFor the performance improvements, we will be able to support\nasynchronous and/or prepare/commit/rollback. Moreover, having multiple\nresolver processes on one database would also help get better\nthrough-put. For the user who needs much better through-put, the user\nalso can select not to wait for resolution after the local commit,\nlike synchronous_commit = ‘local’ in replication.\n\n> As part of this, I'd like to see the 2PC's message flow and disk writes (via email and/or on the following wiki.) That helps evaluate the 2PC performance, because it's hard to figure it out in the code of a large patch set. I'm simply imagining what is typically written in database textbooks and research papers. I'm asking this because I saw some discussion in this thread that some new WAL records are added. I was worried that transactions have to write WAL records other than prepare and commit unlike textbook implementations.\n>\n> Atomic Commit of Distributed Transactions\n> https://wiki.postgresql.org/wiki/Atomic_Commit_of_Distributed_Transactions\n\nUnderstood. I'll add an explanation about the message flow and disk\nwrites to the wiki page.\n\nWe need to consider the point of error handling during resolving\nforeign transactions too.\n\n>\n> > I don’t think we need to stipulate the query cancellation. Anyway I\n> > guess the facts neither that we don’t stipulate anything about query\n> > cancellation now nor that postgres_fdw might not be cancellable in\n> > some situations now are not a reason for not supporting query\n> > cancellation. If it's a desirable behavior and users want it, we need\n> > to put an effort to support it as much as possible like we’ve done in\n> > postgres_fdw. 
Some FDWs unfortunately might not be able to support it\n> > only by their functionality but it would be good if we can achieve\n> > that by combination of PostgreSQL and FDW plugins.\n>\n> Let me comment on this a bit; this is a bit dangerous idea, I'm afraid. We need to pay attention to the FDW interface and its documentation so that FDW developers can implement what we consider important -- query cancellation in your discussion. \"postgres_fdw is OK, so the interface is good\" can create interfaces that other FDW developers can't use. That's what Tomas Vondra pointed out several years ago.\n\nI suspect the story is somewhat different. libpq fortunately supports\nasynchronous execution, but when it comes to canceling the foreign\ntransaction resolution I think basically all FDW plugins are in the\nsame situation at this time. We can choose whether to make it\ncancellable or not. According to the discussion so far, it completely\ndepends on the architecture of this feature. So my point is whether\nit's worth to have this functionality for users and whether users want\nit, not whether postgres_fdw is ok.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 22 Oct 2020 10:39:08 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
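The commit flow Sawada-san proposes above — the backend prepares all foreign transactions, commits locally and answers the client, and leaves the second phase to a background resolver process — can be sketched as a small state machine. All names below are illustrative stand-ins, not the patch's actual API:

```c
/* Schematic of the proposed commit sequence: phase 1 runs in the backend
 * at COMMIT; phase 2 runs later, asynchronously, in a resolver process. */

enum fx_status { FX_INITIAL, FX_PREPARED, FX_COMMITTED };

#define NSERVERS 2

/* One foreign-transaction entry per involved foreign server. */
static enum fx_status fx[NSERVERS];

/* Phase 1, run by the backend at COMMIT: prepare on every server. */
void backend_prepare_all(void)
{
    for (int i = 0; i < NSERVERS; i++)
        fx[i] = FX_PREPARED;          /* "PREPARE TRANSACTION" on server i */
}

/* ...the local commit happens here, and the client gets its OK back
 * without waiting for the foreign transactions to be resolved... */

/* Phase 2, run by the resolver in the background: commit the prepared
 * foreign transactions.  Returns how many entries this pass resolved. */
int resolver_pass(void)
{
    int resolved = 0;

    for (int i = 0; i < NSERVERS; i++)
        if (fx[i] == FX_PREPARED)
        {
            fx[i] = FX_COMMITTED;     /* "COMMIT PREPARED" on server i */
            resolved++;
        }
    return resolved;
}
```

Because phase 2 is asynchronous, a later transaction that needs to see the previous transaction's result has to check the entry's status first — which is the extensibility/visibility trade-off debated in the surrounding messages.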
{
"msg_contents": "On Thu, Oct 22, 2020 at 10:39 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Wed, 21 Oct 2020 at 18:33, tsunakawa.takay@fujitsu.com\n> <tsunakawa.takay@fujitsu.com> wrote:\n> >\n> > From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> > > So what's your opinion?\n> >\n> > My opinion is simple and has not changed. Let's clarify and refine the design first in the following areas (others may have pointed out something else too, but I don't remember), before going deeper into the code review.\n> >\n> > * FDW interface\n> > New functions so that other FDWs can really implement. Currently, XA seems to be the only model we can rely on to validate the FDW interface.\n> > What FDW function would call what XA function(s)? What should be the arguments for the FEW functions?\n>\n> I guess since FDW interfaces may be affected by the feature\n> architecture we can discuss later.\n>\n> > * Performance\n> > Parallel prepare and commits on the client backend. The current implementation is untolerable and should not be the first release quality. I proposed the idea.\n> > (If you insist you don't want to anything about this, I have to think you're just rushing for the patch commit. I want to keep Postgres's reputation.)\n>\n> What is in your mind regarding the implementation of parallel prepare\n> and commit? Given that some FDW plugins don't support asynchronous\n> execution I guess we need to use parallel workers or something. That\n> is, the backend process launches parallel workers to\n> prepare/commit/rollback foreign transactions in parallel. I don't deny\n> this approach but it'll definitely make the feature complex and needs\n> more codes.\n>\n> My point is a small start and keeping simple the first version. Even\n> if we need one or more years for this feature, I think that\n> introducing the simple and minimum functionality as the first version\n> to the core still has benefits. 
We will be able to have the\n> opportunity to get real feedback from users and to fix bugs in the\n> main infrastructure before making it complex. In this sense, the patch\n> having the backend return without waits for resolution after the local\n> commit would be a good start as the first version (i.g., up to\n> applying v26-0006 patch). Anyway, the architecture should be\n> extensible enough for future improvements.\n>\n> For the performance improvements, we will be able to support\n> asynchronous and/or prepare/commit/rollback. Moreover, having multiple\n> resolver processes on one database would also help get better\n> through-put. For the user who needs much better through-put, the user\n> also can select not to wait for resolution after the local commit,\n> like synchronous_commit = ‘local’ in replication.\n>\n> > As part of this, I'd like to see the 2PC's message flow and disk writes (via email and/or on the following wiki.) That helps evaluate the 2PC performance, because it's hard to figure it out in the code of a large patch set. I'm simply imagining what is typically written in database textbooks and research papers. I'm asking this because I saw some discussion in this thread that some new WAL records are added. I was worried that transactions have to write WAL records other than prepare and commit unlike textbook implementations.\n> >\n> > Atomic Commit of Distributed Transactions\n> > https://wiki.postgresql.org/wiki/Atomic_Commit_of_Distributed_Transactions\n>\n> Understood. I'll add an explanation about the message flow and disk\n> writes to the wiki page.\n\nDone.\n\n>\n> We need to consider the point of error handling during resolving\n> foreign transactions too.\n>\n> >\n> > > I don’t think we need to stipulate the query cancellation. 
Anyway I\n> > > guess the facts neither that we don’t stipulate anything about query\n> > > cancellation now nor that postgres_fdw might not be cancellable in\n> > > some situations now are not a reason for not supporting query\n> > > cancellation. If it's a desirable behavior and users want it, we need\n> > > to put an effort to support it as much as possible like we’ve done in\n> > > postgres_fdw. Some FDWs unfortunately might not be able to support it\n> > > only by their functionality but it would be good if we can achieve\n> > > that by combination of PostgreSQL and FDW plugins.\n> >\n> > Let me comment on this a bit; this is a bit dangerous idea, I'm afraid. We need to pay attention to the FDW interface and its documentation so that FDW developers can implement what we consider important -- query cancellation in your discussion. \"postgres_fdw is OK, so the interface is good\" can create interfaces that other FDW developers can't use. That's what Tomas Vondra pointed out several years ago.\n>\n> I suspect the story is somewhat different. libpq fortunately supports\n> asynchronous execution, but when it comes to canceling the foreign\n> transaction resolution I think basically all FDW plugins are in the\n> same situation at this time. We can choose whether to make it\n> cancellable or not. According to the discussion so far, it completely\n> depends on the architecture of this feature. So my point is whether\n> it's worth to have this functionality for users and whether users want\n> it, not whether postgres_fdw is ok.\n>\n\nI've thought again about the idea that once the backend failed to\nresolve a foreign transaction it leaves to a resolver process. With\nthis idea, the backend process perform the 2nd phase of 2PC only once.\nIf an error happens during resolution it leaves to a resolver process\nand returns an error to the client. 
We used to use this idea in the\nprevious patches and it’s discussed sometimes.\n\nFirst of all, this idea doesn’t resolve the problem of error handling\nthat the transaction could return an error to the client in spite of\nhaving been committed the local transaction. There is an argument that\nthis behavior could also happen even in a single server environment\nbut I guess the situation is slightly different. Basically what the\ntransaction does after the commit is cleanup. An error could happen\nduring cleanup but if it happens it’s likely due to a bug of\nsomething wrong inside PostgreSQL or OS. On the other hand, during and\nafter resolution the transaction does major works such as connecting a\nforeign server, sending an SQL, getting the result, and writing a WAL\nto remove the entry. These are more likely to happen an error.\n\nAlso, with this idea, the client needs to check if the error got from\nthe server is really true because the local transaction might have\nbeen committed. Although this could happen even in a single server\nenvironment how many users check that in practice? If a server\ncrashes, subsequent transactions end up failing due to a network\nconnection error but it seems hard to distinguish between such a real\nerror and the fake error.\n\nMoreover, it’s questionable in terms of extensibility. We would not\nable to support keeping waiting for distributed transactions to\ncomplete even if an error happens, like synchronous replication. The\nuser might want to wait in case where the failure is temporary such as\ntemporary network disconnection. Trying resolution only once seems to\nhave cons of both asynchronous and synchronous resolutions.\n\nSo I’m thinking that with this idea the user will need to change their\napplication so that it checks if the error they got is really true,\nwhich is cumbersome for users. 
Also, it seems to me we need to\ndiscuss carefully whether this idea could weaken extensibility.\n\n\nAnyway, according to the discussion, it seems to me that we have a\nconsensus so far that the backend process prepares all foreign\ntransactions and a resolver process is necessary to resolve in-doubt\ntransactions in the background. So I’ve changed the patch set as follows.\nApplying all these patches, we can support asynchronous foreign\ntransaction resolution. That is, at transaction commit the backend\nprocess prepares all foreign transactions and then commits the local\ntransaction. After that, it returns commit OK to the client while\nleaving the prepared foreign transactions to a resolver process. A\nresolver process fetches the foreign transactions to resolve and\nresolves them in the background. Since the 2nd phase of 2PC is performed\nasynchronously, a transaction that wants to see the previous\ntransaction's result needs to check its status.\n\nHere is a brief explanation of each patch:\n\nv27-0001-Introduce-transaction-manager-for-foreign-transa.patch\n\nThis commit adds the basic foreign transaction manager and the\nCommitForeignTransaction and RollbackForeignTransaction APIs. These\nAPIs support only one-phase commit. With this change, an FDW is able to control\nits transactions using the foreign transaction manager rather than\nXactCallback.\n\nv27-0002-postgres_fdw-supports-commit-and-rollback-APIs.patch\n\nThis commit implements both the CommitForeignTransaction and\nRollbackForeignTransaction APIs in postgres_fdw. Note that since\nPREPARE TRANSACTION is still not supported, there is nothing new the\nuser is able to do yet.\n\nv27-0003-Recreate-RemoveForeignServerById.patch\n\nThis commit recreates RemoveForeignServerById, which was removed by\nb1d32d3e3. 
This is necessary because we need to check whether there is a\nforeign transaction involved with the foreign server that is about to\nbe removed.\n\nv27-0004-Add-PrepareForeignTransaction-API.patch\n\nThis commit adds prepared foreign transaction support, including WAL\nlogging and recovery, and the PrepareForeignTransaction API. With this\nchange, the user is able to run 'PREPARE TRANSACTION’ and\n'COMMIT/ROLLBACK PREPARED' commands on a transaction that involves\nforeign servers. But note that COMMIT/ROLLBACK PREPARED ends only the\nlocal transaction; it doesn't do anything for foreign transactions.\nTherefore, the user needs to resolve foreign transactions manually by\nexecuting the pg_resolve_foreign_xacts() SQL function, which is also\nintroduced by this commit.\n\nv27-0005-postgres_fdw-supports-prepare-API.patch\n\nThis commit implements the PrepareForeignTransaction API and makes\nCommitForeignTransaction and RollbackForeignTransaction support\ntwo-phase commit.\n\nv27-0006-Add-GetPrepareId-API.patch\n\nThis commit adds the GetPrepareID API.\n\nv27-0007-Introduce-foreign-transaction-launcher-and-resol.patch\n\nThis commit introduces the foreign transaction resolver and launcher\nprocesses. With this change, the user no longer needs to manually execute\nthe pg_resolve_foreign_xacts() function to resolve foreign transactions\nprepared by PREPARE TRANSACTION and left behind by COMMIT/ROLLBACK PREPARED.\nInstead, a resolver process automatically resolves them in the background.\n\nv27-0008-Prepare-foreign-transactions-at-commit-time.patch\n\nWith this commit, the transaction prepares foreign transactions marked\nas modified at transaction commit if foreign_twophase_commit is\n‘required’. Previously the user needed to run PREPARE TRANSACTION and\nCOMMIT/ROLLBACK PREPARED to use 2PC, but this makes 2PC\ntransparent to the user. The transaction returns commit OK to\nthe client after committing the local transaction and notifying the\nresolver process, without waiting. 
Foreign transactions are\nasynchronously resolved by the resolver process.\n\nv27-0009-postgres_fdw-marks-foreign-transaction-as-modifi.patch\n\nWith this commit, transactions started via postgres_fdw are marked\nas modified, which is necessary to use 2PC.\n\nv27-0010-Documentation-update.patch\nv27-0011-Add-regression-tests-for-foreign-twophase-commit.patch\n\nDocumentation update and regression tests.\n\nThe missing piece from the previous version of the patch is synchronous\ntransaction resolution. In the previous patch, foreign transactions\nwere synchronously resolved by a resolver process. But since it's under\ndiscussion whether this is a good approach, and I'm considering\noptimizing the logic, it’s not included in the current patch set.\n\n\nRegards,\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/",
"msg_date": "Thu, 5 Nov 2020 12:15:59 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
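The asynchronous commit flow described in the message above (prepare all foreign transactions, commit locally, return OK, and leave the 2nd phase to a background resolver) can be sketched as a minimal Python simulation. All names here (`ForeignServer`, `resolver_queue`, etc.) are illustrative stand-ins, not the patch's actual C API:

```python
# Hypothetical simulation of the v27 commit flow; names are illustrative.
from queue import Queue

class ForeignServer:
    def __init__(self, name):
        self.name = name
        self.prepared = set()   # foreign transactions after phase 1
        self.committed = set()  # foreign transactions after phase 2

    def prepare(self, xact_id):
        # Phase 1 (PrepareForeignTransaction): durable but not yet committed.
        self.prepared.add(xact_id)

    def commit_prepared(self, xact_id):
        # Phase 2 (CommitForeignTransaction on a prepared transaction).
        self.prepared.discard(xact_id)
        self.committed.add(xact_id)

resolver_queue = Queue()  # work handed off to the resolver process

def commit_distributed(xact_id, servers):
    for s in servers:                        # prepare on every participant
        s.prepare(xact_id)
    # ... the local transaction would be committed here ...
    resolver_queue.put((xact_id, servers))   # leave phase 2 to the resolver
    return "COMMIT"  # client gets OK before foreign commits have finished

def resolver_run_once():
    # Background resolver: fetch one entry and finish phase 2.
    xact_id, servers = resolver_queue.get()
    for s in servers:
        s.commit_prepared(xact_id)

servers = [ForeignServer("fs1"), ForeignServer("fs2")]
assert commit_distributed("tx1", servers) == "COMMIT"
assert all("tx1" in s.prepared for s in servers)   # still in doubt
resolver_run_once()
assert all("tx1" in s.committed for s in servers)  # resolved in background
```

Note the window the sketch makes visible: after `commit_distributed` returns, the foreign transactions are still only prepared, which is why a client that wants to see the previous transaction's result must check its status first.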
{
"msg_contents": "On Thu, Nov 5, 2020 at 12:15 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Oct 22, 2020 at 10:39 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Wed, 21 Oct 2020 at 18:33, tsunakawa.takay@fujitsu.com\n> > <tsunakawa.takay@fujitsu.com> wrote:\n> > >\n> > > From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> > > > So what's your opinion?\n> > >\n> > > My opinion is simple and has not changed. Let's clarify and refine the design first in the following areas (others may have pointed out something else too, but I don't remember), before going deeper into the code review.\n> > >\n> > > * FDW interface\n> > > New functions so that other FDWs can really implement. Currently, XA seems to be the only model we can rely on to validate the FDW interface.\n> > > What FDW function would call what XA function(s)? What should be the arguments for the FEW functions?\n> >\n> > I guess since FDW interfaces may be affected by the feature\n> > architecture we can discuss later.\n> >\n> > > * Performance\n> > > Parallel prepare and commits on the client backend. The current implementation is untolerable and should not be the first release quality. I proposed the idea.\n> > > (If you insist you don't want to anything about this, I have to think you're just rushing for the patch commit. I want to keep Postgres's reputation.)\n> >\n> > What is in your mind regarding the implementation of parallel prepare\n> > and commit? Given that some FDW plugins don't support asynchronous\n> > execution I guess we need to use parallel workers or something. That\n> > is, the backend process launches parallel workers to\n> > prepare/commit/rollback foreign transactions in parallel. I don't deny\n> > this approach but it'll definitely make the feature complex and needs\n> > more codes.\n> >\n> > My point is a small start and keeping simple the first version. 
Even\n> > if we need one or more years for this feature, I think that\n> > introducing the simple and minimum functionality as the first version\n> > to the core still has benefits. We will be able to have the\n> > opportunity to get real feedback from users and to fix bugs in the\n> > main infrastructure before making it complex. In this sense, the patch\n> > having the backend return without waits for resolution after the local\n> > commit would be a good start as the first version (i.g., up to\n> > applying v26-0006 patch). Anyway, the architecture should be\n> > extensible enough for future improvements.\n> >\n> > For the performance improvements, we will be able to support\n> > asynchronous and/or prepare/commit/rollback. Moreover, having multiple\n> > resolver processes on one database would also help get better\n> > through-put. For the user who needs much better through-put, the user\n> > also can select not to wait for resolution after the local commit,\n> > like synchronous_commit = ‘local’ in replication.\n> >\n> > > As part of this, I'd like to see the 2PC's message flow and disk writes (via email and/or on the following wiki.) That helps evaluate the 2PC performance, because it's hard to figure it out in the code of a large patch set. I'm simply imagining what is typically written in database textbooks and research papers. I'm asking this because I saw some discussion in this thread that some new WAL records are added. I was worried that transactions have to write WAL records other than prepare and commit unlike textbook implementations.\n> > >\n> > > Atomic Commit of Distributed Transactions\n> > > https://wiki.postgresql.org/wiki/Atomic_Commit_of_Distributed_Transactions\n> >\n> > Understood. 
I'll add an explanation about the message flow and disk\n> > writes to the wiki page.\n>\n> Done.\n>\n> >\n> > We need to consider the point of error handling during resolving\n> > foreign transactions too.\n> >\n> > >\n> > > > I don’t think we need to stipulate the query cancellation. Anyway I\n> > > > guess the facts neither that we don’t stipulate anything about query\n> > > > cancellation now nor that postgres_fdw might not be cancellable in\n> > > > some situations now are not a reason for not supporting query\n> > > > cancellation. If it's a desirable behavior and users want it, we need\n> > > > to put an effort to support it as much as possible like we’ve done in\n> > > > postgres_fdw. Some FDWs unfortunately might not be able to support it\n> > > > only by their functionality but it would be good if we can achieve\n> > > > that by combination of PostgreSQL and FDW plugins.\n> > >\n> > > Let me comment on this a bit; this is a bit dangerous idea, I'm afraid. We need to pay attention to the FDW interface and its documentation so that FDW developers can implement what we consider important -- query cancellation in your discussion. \"postgres_fdw is OK, so the interface is good\" can create interfaces that other FDW developers can't use. That's what Tomas Vondra pointed out several years ago.\n> >\n> > I suspect the story is somewhat different. libpq fortunately supports\n> > asynchronous execution, but when it comes to canceling the foreign\n> > transaction resolution I think basically all FDW plugins are in the\n> > same situation at this time. We can choose whether to make it\n> > cancellable or not. According to the discussion so far, it completely\n> > depends on the architecture of this feature. 
So my point is whether\n> > it's worth to have this functionality for users and whether users want\n> > it, not whether postgres_fdw is ok.\n> >\n>\n> I've thought again about the idea that once the backend failed to\n> resolve a foreign transaction it leaves to a resolver process. With\n> this idea, the backend process perform the 2nd phase of 2PC only once.\n> If an error happens during resolution it leaves to a resolver process\n> and returns an error to the client. We used to use this idea in the\n> previous patches and it’s discussed sometimes.\n>\n> First of all, this idea doesn’t resolve the problem of error handling\n> that the transaction could return an error to the client in spite of\n> having been committed the local transaction. There is an argument that\n> this behavior could also happen even in a single server environment\n> but I guess the situation is slightly different. Basically what the\n> transaction does after the commit is cleanup. An error could happen\n> during cleanup but if it happens it’s likely due to a bug of\n> something wrong inside PostgreSQL or OS. On the other hand, during and\n> after resolution the transaction does major works such as connecting a\n> foreign server, sending an SQL, getting the result, and writing a WAL\n> to remove the entry. These are more likely to happen an error.\n>\n> Also, with this idea, the client needs to check if the error got from\n> the server is really true because the local transaction might have\n> been committed. Although this could happen even in a single server\n> environment how many users check that in practice? If a server\n> crashes, subsequent transactions end up failing due to a network\n> connection error but it seems hard to distinguish between such a real\n> error and the fake error.\n>\n> Moreover, it’s questionable in terms of extensibility. 
We would not\n> able to support keeping waiting for distributed transactions to\n> complete even if an error happens, like synchronous replication. The\n> user might want to wait in case where the failure is temporary such as\n> temporary network disconnection. Trying resolution only once seems to\n> have cons of both asynchronous and synchronous resolutions.\n>\n> So I’m thinking that with this idea the user will need to change their\n> application so that it checks if the error they got is really true,\n> which is cumbersome for users. Also, it seems to me we need to\n> circumspectly discuss whether this idea could weaken extensibility.\n>\n>\n> Anyway, according to the discussion, it seems to me that we got a\n> consensus so far that the backend process prepares all foreign\n> transactions and a resolver process is necessary to resolve in-doubt\n> transaction in background. So I’ve changed the patch set as follows.\n> Applying these all patches, we can support asynchronous foreign\n> transaction resolution. That is, at transaction commit the backend\n> process prepares all foreign transactions, and then commit the local\n> transaction. After that, it returns OK of commit to the client while\n> leaving the prepared foreign transaction to a resolver process. A\n> resolver process fetches the foreign transactions to resolve and\n> resolves them in background. Since the 2nd phase of 2PC is performed\n> asynchronously a transaction that wants to see the previous\n> transaction result needs to check its status.\n>\n> Here is brief explaination for each patches:\n>\n> v27-0001-Introduce-transaction-manager-for-foreign-transa.patch\n>\n> This commit adds the basic foreign transaction manager,\n> CommitForeignTransaction, and RollbackForeignTransaction API. These\n> APIs support only one-phase. 
With this change, FDW is able to control\n> its transaction using the foreign transaction manager, not using\n> XactCallback.\n>\n> v27-0002-postgres_fdw-supports-commit-and-rollback-APIs.patch\n>\n> This commit implements both CommitForeignTransaction and\n> RollbackForeignTransaction APIs in postgres_fdw. Note that since\n> PREPARE TRANSACTION is still not supported there is nothing the user\n> newly is able to do.\n>\n> v27-0003-Recreate-RemoveForeignServerById.patch\n>\n> This commit recreates RemoveForeignServerById that was removed by\n> b1d32d3e3. This is necessary because we need to check if there is a\n> foreign transaction involved with the foreign server that is about to\n> be removed.\n>\n> v27-0004-Add-PrepareForeignTransaction-API.patch\n>\n> This commit adds prepared foreign transaction support including WAL\n> logging and recovery, and PrepareForeignTransaction API. With this\n> change, the user is able to do 'PREPARE TRANSACTION’ and\n> 'COMMIT/ROLLBACK PREPARED' commands on the transaction that involves\n> foreign servers. But note that COMMIT/ROLLBACK PREPARED ends only the\n> local transaction. It doesn't do anything for foreign transactions.\n> Therefore, the user needs to resolve foreign transactions manually by\n> executing the pg_resolve_foreign_xacts() SQL function which is also\n> introduced by this commit.\n>\n> v27-0005-postgres_fdw-supports-prepare-API.patch\n>\n> This commit implements PrepareForeignTransaction API and makes\n> CommitForeignTransaction and RollbackForeignTransaction supports\n> two-phase commit.\n>\n> v27-0006-Add-GetPrepareId-API.patch\n>\n> This commit adds GetPrepareID API.\n>\n> v27-0007-Introduce-foreign-transaction-launcher-and-resol.patch\n>\n> This commit introduces foreign transaction resolver and launcher\n> processes. 
With this change, the user doesn’t need to manually execute\n> pg_resolve_foreign_xacts() function to resolve foreign transactions\n> prepared by PREPARE TRANSACTION and left by COMMIT/ROLLBACK PREPARED.\n> Instead, a resolver process automatically resolves them in background.\n>\n> v27-0008-Prepare-foreign-transactions-at-commit-time.patch\n>\n> With this commit, the transaction prepares foreign transactions marked\n> as modified at transaction commit if foreign_twophase_commit is\n> ‘required’. Previously the user needs to do PREPARE TRANSACTION and\n> COMMIT/ROLLBACK PREPARED to use 2PC but it enables us to use 2PC\n> transparently to the user. But the transaction returns OK of commit to\n> the client after committing the local transaction and notifying the\n> resolver process, without waits. Foreign transactions are\n> asynchronously resolved by the resolver process.\n>\n> v27-0009-postgres_fdw-marks-foreign-transaction-as-modifi.patch\n>\n> With this commit, the transactions started via postgres_fdw are marked\n> as modified, which is necessary to use 2PC.\n>\n> v27-0010-Documentation-update.patch\n> v27-0011-Add-regression-tests-for-foreign-twophase-commit.patch\n>\n> Documentation update and regression tests.\n>\n> The missing piece from the previous version patch is synchronously\n> transaction resolution. In the previous patch, foreign transactions\n> are synchronously resolved by a resolver process. But since it's under\n> discussion whether this is a good approach and I'm considering\n> optimizing the logic it’s not included in the current patch set.\n>\n>\n\nCfbot reported an error. I've attached the updated version patch set\nto make cfbot happy.\n\nRegards,\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/",
"msg_date": "Sun, 8 Nov 2020 14:11:41 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Sun, Nov 8, 2020 at 2:11 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Nov 5, 2020 at 12:15 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Oct 22, 2020 at 10:39 AM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > On Wed, 21 Oct 2020 at 18:33, tsunakawa.takay@fujitsu.com\n> > > <tsunakawa.takay@fujitsu.com> wrote:\n> > > >\n> > > > From: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> > > > > So what's your opinion?\n> > > >\n> > > > My opinion is simple and has not changed. Let's clarify and refine the design first in the following areas (others may have pointed out something else too, but I don't remember), before going deeper into the code review.\n> > > >\n> > > > * FDW interface\n> > > > New functions so that other FDWs can really implement. Currently, XA seems to be the only model we can rely on to validate the FDW interface.\n> > > > What FDW function would call what XA function(s)? What should be the arguments for the FEW functions?\n> > >\n> > > I guess since FDW interfaces may be affected by the feature\n> > > architecture we can discuss later.\n> > >\n> > > > * Performance\n> > > > Parallel prepare and commits on the client backend. The current implementation is untolerable and should not be the first release quality. I proposed the idea.\n> > > > (If you insist you don't want to anything about this, I have to think you're just rushing for the patch commit. I want to keep Postgres's reputation.)\n> > >\n> > > What is in your mind regarding the implementation of parallel prepare\n> > > and commit? Given that some FDW plugins don't support asynchronous\n> > > execution I guess we need to use parallel workers or something. That\n> > > is, the backend process launches parallel workers to\n> > > prepare/commit/rollback foreign transactions in parallel. 
I don't deny\n> > > this approach but it'll definitely make the feature complex and needs\n> > > more codes.\n> > >\n> > > My point is a small start and keeping simple the first version. Even\n> > > if we need one or more years for this feature, I think that\n> > > introducing the simple and minimum functionality as the first version\n> > > to the core still has benefits. We will be able to have the\n> > > opportunity to get real feedback from users and to fix bugs in the\n> > > main infrastructure before making it complex. In this sense, the patch\n> > > having the backend return without waits for resolution after the local\n> > > commit would be a good start as the first version (i.g., up to\n> > > applying v26-0006 patch). Anyway, the architecture should be\n> > > extensible enough for future improvements.\n> > >\n> > > For the performance improvements, we will be able to support\n> > > asynchronous and/or prepare/commit/rollback. Moreover, having multiple\n> > > resolver processes on one database would also help get better\n> > > through-put. For the user who needs much better through-put, the user\n> > > also can select not to wait for resolution after the local commit,\n> > > like synchronous_commit = ‘local’ in replication.\n> > >\n> > > > As part of this, I'd like to see the 2PC's message flow and disk writes (via email and/or on the following wiki.) That helps evaluate the 2PC performance, because it's hard to figure it out in the code of a large patch set. I'm simply imagining what is typically written in database textbooks and research papers. I'm asking this because I saw some discussion in this thread that some new WAL records are added. I was worried that transactions have to write WAL records other than prepare and commit unlike textbook implementations.\n> > > >\n> > > > Atomic Commit of Distributed Transactions\n> > > > https://wiki.postgresql.org/wiki/Atomic_Commit_of_Distributed_Transactions\n> > >\n> > > Understood. 
I'll add an explanation about the message flow and disk\n> > > writes to the wiki page.\n> >\n> > Done.\n> >\n> > >\n> > > We need to consider the point of error handling during resolving\n> > > foreign transactions too.\n> > >\n> > > >\n> > > > > I don’t think we need to stipulate the query cancellation. Anyway I\n> > > > > guess the facts neither that we don’t stipulate anything about query\n> > > > > cancellation now nor that postgres_fdw might not be cancellable in\n> > > > > some situations now are not a reason for not supporting query\n> > > > > cancellation. If it's a desirable behavior and users want it, we need\n> > > > > to put an effort to support it as much as possible like we’ve done in\n> > > > > postgres_fdw. Some FDWs unfortunately might not be able to support it\n> > > > > only by their functionality but it would be good if we can achieve\n> > > > > that by combination of PostgreSQL and FDW plugins.\n> > > >\n> > > > Let me comment on this a bit; this is a bit dangerous idea, I'm afraid. We need to pay attention to the FDW interface and its documentation so that FDW developers can implement what we consider important -- query cancellation in your discussion. \"postgres_fdw is OK, so the interface is good\" can create interfaces that other FDW developers can't use. That's what Tomas Vondra pointed out several years ago.\n> > >\n> > > I suspect the story is somewhat different. libpq fortunately supports\n> > > asynchronous execution, but when it comes to canceling the foreign\n> > > transaction resolution I think basically all FDW plugins are in the\n> > > same situation at this time. We can choose whether to make it\n> > > cancellable or not. According to the discussion so far, it completely\n> > > depends on the architecture of this feature. 
So my point is whether\n> > > it's worth to have this functionality for users and whether users want\n> > > it, not whether postgres_fdw is ok.\n> > >\n> >\n> > I've thought again about the idea that once the backend failed to\n> > resolve a foreign transaction it leaves to a resolver process. With\n> > this idea, the backend process perform the 2nd phase of 2PC only once.\n> > If an error happens during resolution it leaves to a resolver process\n> > and returns an error to the client. We used to use this idea in the\n> > previous patches and it’s discussed sometimes.\n> >\n> > First of all, this idea doesn’t resolve the problem of error handling\n> > that the transaction could return an error to the client in spite of\n> > having been committed the local transaction. There is an argument that\n> > this behavior could also happen even in a single server environment\n> > but I guess the situation is slightly different. Basically what the\n> > transaction does after the commit is cleanup. An error could happen\n> > during cleanup but if it happens it’s likely due to a bug of\n> > something wrong inside PostgreSQL or OS. On the other hand, during and\n> > after resolution the transaction does major works such as connecting a\n> > foreign server, sending an SQL, getting the result, and writing a WAL\n> > to remove the entry. These are more likely to happen an error.\n> >\n> > Also, with this idea, the client needs to check if the error got from\n> > the server is really true because the local transaction might have\n> > been committed. Although this could happen even in a single server\n> > environment how many users check that in practice? If a server\n> > crashes, subsequent transactions end up failing due to a network\n> > connection error but it seems hard to distinguish between such a real\n> > error and the fake error.\n> >\n> > Moreover, it’s questionable in terms of extensibility. 
We would not\n> > able to support keeping waiting for distributed transactions to\n> > complete even if an error happens, like synchronous replication. The\n> > user might want to wait in case where the failure is temporary such as\n> > temporary network disconnection. Trying resolution only once seems to\n> > have cons of both asynchronous and synchronous resolutions.\n> >\n> > So I’m thinking that with this idea the user will need to change their\n> > application so that it checks if the error they got is really true,\n> > which is cumbersome for users. Also, it seems to me we need to\n> > circumspectly discuss whether this idea could weaken extensibility.\n> >\n> >\n> > Anyway, according to the discussion, it seems to me that we got a\n> > consensus so far that the backend process prepares all foreign\n> > transactions and a resolver process is necessary to resolve in-doubt\n> > transaction in background. So I’ve changed the patch set as follows.\n> > Applying these all patches, we can support asynchronous foreign\n> > transaction resolution. That is, at transaction commit the backend\n> > process prepares all foreign transactions, and then commit the local\n> > transaction. After that, it returns OK of commit to the client while\n> > leaving the prepared foreign transaction to a resolver process. A\n> > resolver process fetches the foreign transactions to resolve and\n> > resolves them in background. Since the 2nd phase of 2PC is performed\n> > asynchronously a transaction that wants to see the previous\n> > transaction result needs to check its status.\n> >\n> > Here is brief explaination for each patches:\n> >\n> > v27-0001-Introduce-transaction-manager-for-foreign-transa.patch\n> >\n> > This commit adds the basic foreign transaction manager,\n> > CommitForeignTransaction, and RollbackForeignTransaction API. These\n> > APIs support only one-phase. 
With this change, FDW is able to control\n> > its transaction using the foreign transaction manager, not using\n> > XactCallback.\n> >\n> > v27-0002-postgres_fdw-supports-commit-and-rollback-APIs.patch\n> >\n> > This commit implements both CommitForeignTransaction and\n> > RollbackForeignTransaction APIs in postgres_fdw. Note that since\n> > PREPARE TRANSACTION is still not supported there is nothing the user\n> > newly is able to do.\n> >\n> > v27-0003-Recreate-RemoveForeignServerById.patch\n> >\n> > This commit recreates RemoveForeignServerById that was removed by\n> > b1d32d3e3. This is necessary because we need to check if there is a\n> > foreign transaction involved with the foreign server that is about to\n> > be removed.\n> >\n> > v27-0004-Add-PrepareForeignTransaction-API.patch\n> >\n> > This commit adds prepared foreign transaction support including WAL\n> > logging and recovery, and PrepareForeignTransaction API. With this\n> > change, the user is able to do 'PREPARE TRANSACTION’ and\n> > 'COMMIT/ROLLBACK PREPARED' commands on the transaction that involves\n> > foreign servers. But note that COMMIT/ROLLBACK PREPARED ends only the\n> > local transaction. It doesn't do anything for foreign transactions.\n> > Therefore, the user needs to resolve foreign transactions manually by\n> > executing the pg_resolve_foreign_xacts() SQL function which is also\n> > introduced by this commit.\n> >\n> > v27-0005-postgres_fdw-supports-prepare-API.patch\n> >\n> > This commit implements PrepareForeignTransaction API and makes\n> > CommitForeignTransaction and RollbackForeignTransaction supports\n> > two-phase commit.\n> >\n> > v27-0006-Add-GetPrepareId-API.patch\n> >\n> > This commit adds GetPrepareID API.\n> >\n> > v27-0007-Introduce-foreign-transaction-launcher-and-resol.patch\n> >\n> > This commit introduces foreign transaction resolver and launcher\n> > processes. 
With this change, the user doesn’t need to manually execute\n> > pg_resolve_foreign_xacts() function to resolve foreign transactions\n> > prepared by PREPARE TRANSACTION and left by COMMIT/ROLLBACK PREPARED.\n> > Instead, a resolver process automatically resolves them in background.\n> >\n> > v27-0008-Prepare-foreign-transactions-at-commit-time.patch\n> >\n> > With this commit, the transaction prepares foreign transactions marked\n> > as modified at transaction commit if foreign_twophase_commit is\n> > ‘required’. Previously the user needs to do PREPARE TRANSACTION and\n> > COMMIT/ROLLBACK PREPARED to use 2PC but it enables us to use 2PC\n> > transparently to the user. But the transaction returns OK of commit to\n> > the client after committing the local transaction and notifying the\n> > resolver process, without waits. Foreign transactions are\n> > asynchronously resolved by the resolver process.\n> >\n> > v27-0009-postgres_fdw-marks-foreign-transaction-as-modifi.patch\n> >\n> > With this commit, the transactions started via postgres_fdw are marked\n> > as modified, which is necessary to use 2PC.\n> >\n> > v27-0010-Documentation-update.patch\n> > v27-0011-Add-regression-tests-for-foreign-twophase-commit.patch\n> >\n> > Documentation update and regression tests.\n> >\n> > The missing piece from the previous version patch is synchronously\n> > transaction resolution. In the previous patch, foreign transactions\n> > are synchronously resolved by a resolver process. But since it's under\n> > discussion whether this is a good approach and I'm considering\n> > optimizing the logic it’s not included in the current patch set.\n> >\n> >\n>\n> Cfbot reported an error. I've attached the updated version patch set\n> to make cfbot happy.\n\nSince the previous version conflicts with the current HEAD I've\nattached the rebased version patch set.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/",
"msg_date": "Wed, 25 Nov 2020 21:50:20 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Wed, Nov 25, 2020 at 9:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Since the previous version conflicts with the current HEAD I've\n> attached the rebased version patch set.\n\nRebased the patch set again to the current HEAD.\n\nThe discussion of this patch is very long, so here is a short summary\nof the current state:\n\nIt’s still under discussion which approach is best for the\ndistributed transaction commit as a building block of built-in sharding\nusing foreign data wrappers.\n\nSince we’re considering using this feature for built-in\nsharding, the design depends on the architecture of built-in sharding.\nFor example, with the current patch, the PostgreSQL node that received\na COMMIT from the client works as a coordinator and commits the\ntransactions using 2PC on all foreign servers involved with the\ntransaction. This approach would be good with a decentralized\nsharding architecture but not with a centralized architecture like the\nGTM node of Postgres-XC and Postgres-XL, which is a dedicated component\nresponsible for transaction management. Since we haven't reached a\nconsensus on the built-in sharding architecture yet, it's still an\nopen question whether this patch's approach is really good as a building\nblock of built-in sharding.\n\nOn the other hand, this feature is not necessarily dedicated to\nbuilt-in sharding. For example, distributed transaction commit\nthrough FDWs is also important when atomically moving data between two\nservers via FDWs. Using a dedicated process or server like a GTM could\nbe overkill there; having the node that received the COMMIT work as the\ncoordinator would be better and more straightforward.\n\nThere is no noticeable TODO in the functionality so far covered by\nthis patch set. This patch set adds new FDW APIs to support 2PC,\nintroduces the global transaction manager, and implements those FDW\nAPIs in postgres_fdw. 
Also, it has regression tests and documentation.\nTransactions on foreign servers involved with the distributed\ntransaction are committed using 2PC. Committing using 2PC is performed\nasynchronously and transparently to the user. Therefore, it doesn’t\nguarantee that transactions on the foreign server are also committed\nwhen the client gets an acknowledgment of COMMIT. The patch doesn't\ncover synchronous foreign transaction commit via 2PC is not covered by\nthis patch as we still need a discussion on the design.\n\nRegards,\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/",
"msg_date": "Mon, 28 Dec 2020 23:24:10 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Mon, Dec 28, 2020 at 11:24 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Nov 25, 2020 at 9:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Since the previous version conflicts with the current HEAD I've\n> > attached the rebased version patch set.\n>\n> Rebased the patch set again to the current HEAD.\n>\n> The discussion of this patch is very long so here is a short summary\n> of the current state:\n>\n> It’s still under discussion which approaches are the best for the\n> distributed transaction commit as a building block of built-in sharing\n> using foreign data wrappers.\n>\n> Since we’re considering that we use this feature for built-in\n> sharding, the design depends on the architecture of built-in sharding.\n> For example, with the current patch, the PostgreSQL node that received\n> a COMMIT from the client works as a coordinator and it commits the\n> transactions using 2PC on all foreign servers involved with the\n> transaction. This approach would be good with the de-centralized\n> sharding architecture but not with centralized architecture like the\n> GTM node of Postgres-XC and Postgres-XL that is a dedicated component\n> that is responsible for transaction management. Since we don't get a\n> consensus on the built-in sharding architecture yet, it's still an\n> open question that this patch's approach is really good as a building\n> block of the built-in sharding.\n>\n> On the other hand, this feature is not necessarily dedicated to the\n> built-in sharding. For example, the distributed transaction commit\n> through FDW is important also when atomically moving data between two\n> servers via FDWs. Using a dedicated process or server like GTM could\n> be an over solution. Having the node that received a COMMIT work as a\n> coordinator would be better and straight forward.\n>\n> There is no noticeable TODO in the functionality so far covered by\n> this patch set. 
This patchset adds new FDW APIs to support 2PC,\n> introduces the global transaction manager, and implement those FDW\n> APIs to postgres_fdw. Also, it has regression tests and documentation.\n> Transactions on foreign servers involved with the distributed\n> transaction are committed using 2PC. Committing using 2PC is performed\n> asynchronously and transparently to the user. Therefore, it doesn’t\n> guarantee that transactions on the foreign server are also committed\n> when the client gets an acknowledgment of COMMIT. The patch doesn't\n> cover synchronous foreign transaction commit via 2PC is not covered by\n> this patch as we still need a discussion on the design.\n>\n\nI've attached the rebased patches to make cfbot happy.\n\n\nRegards,\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/",
"msg_date": "Thu, 7 Jan 2021 10:44:50 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "Hi,\nFor pg-foreign/v31-0004-Add-PrepareForeignTransaction-API.patch :\n\nHowever these functions are not neither committed nor aborted at\n\nI think the double negation was not intentional. Should be 'are neither ...'\n\nFor FdwXactShmemSize(), is another MAXALIGN(size) needed prior to the\nreturn statement ?\n\n+ fdwxact = FdwXactInsertFdwXactEntry(xid, fdw_part);\n\nFor the function name, Fdw and Xact appear twice, each. Maybe one of them\ncan be dropped ?\n\n+ * we don't need to anything for this participant because all\nforeign\n\n'need to' -> 'need to do'\n\n+ else if (TransactionIdDidAbort(xid))\n+ return FDWXACT_STATUS_ABORTING;\n+\nthe 'else' can be omitted since the preceding if would return.\n\n+ if (max_prepared_foreign_xacts <= 0)\n\nI wonder when the value for max_prepared_foreign_xacts would be negative\n(and whether that should be considered an error).\n\nCheers\n\nOn Wed, Jan 6, 2021 at 5:45 PM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> On Mon, Dec 28, 2020 at 11:24 PM Masahiko Sawada <sawada.mshk@gmail.com>\n> wrote:\n> >\n> > On Wed, Nov 25, 2020 at 9:50 PM Masahiko Sawada <sawada.mshk@gmail.com>\n> wrote:\n> > >\n> > > Since the previous version conflicts with the current HEAD I've\n> > > attached the rebased version patch set.\n> >\n> > Rebased the patch set again to the current HEAD.\n> >\n> > The discussion of this patch is very long so here is a short summary\n> > of the current state:\n> >\n> > It’s still under discussion which approaches are the best for the\n> > distributed transaction commit as a building block of built-in sharing\n> > using foreign data wrappers.\n> >\n> > Since we’re considering that we use this feature for built-in\n> > sharding, the design depends on the architecture of built-in sharding.\n> > For example, with the current patch, the PostgreSQL node that received\n> > a COMMIT from the client works as a coordinator and it commits the\n> > transactions using 2PC on all foreign servers 
involved with the\n> > transaction. This approach would be good with the de-centralized\n> > sharding architecture but not with centralized architecture like the\n> > GTM node of Postgres-XC and Postgres-XL that is a dedicated component\n> > that is responsible for transaction management. Since we don't get a\n> > consensus on the built-in sharding architecture yet, it's still an\n> > open question that this patch's approach is really good as a building\n> > block of the built-in sharding.\n> >\n> > On the other hand, this feature is not necessarily dedicated to the\n> > built-in sharding. For example, the distributed transaction commit\n> > through FDW is important also when atomically moving data between two\n> > servers via FDWs. Using a dedicated process or server like GTM could\n> > be an over solution. Having the node that received a COMMIT work as a\n> > coordinator would be better and straight forward.\n> >\n> > There is no noticeable TODO in the functionality so far covered by\n> > this patch set. This patchset adds new FDW APIs to support 2PC,\n> > introduces the global transaction manager, and implement those FDW\n> > APIs to postgres_fdw. Also, it has regression tests and documentation.\n> > Transactions on foreign servers involved with the distributed\n> > transaction are committed using 2PC. Committing using 2PC is performed\n> > asynchronously and transparently to the user. Therefore, it doesn’t\n> > guarantee that transactions on the foreign server are also committed\n> > when the client gets an acknowledgment of COMMIT. 
The patch doesn't\n> > cover synchronous foreign transaction commit via 2PC is not covered by\n> > this patch as we still need a discussion on the design.\n> >\n>\n> I've attached the rebased patches to make cfbot happy.\n>\n>\n> Regards,\n>\n> --\n> Masahiko Sawada\n> EnterpriseDB: https://www.enterprisedb.com/\n>",
"msg_date": "Wed, 6 Jan 2021 18:45:29 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Thu, Jan 7, 2021 at 11:44 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> Hi,\n\nThank you for reviewing the patch!\n\n> For pg-foreign/v31-0004-Add-PrepareForeignTransaction-API.patch :\n>\n> However these functions are not neither committed nor aborted at\n>\n> I think the double negation was not intentional. Should be 'are neither ...'\n\nFixed.\n\n>\n> For FdwXactShmemSize(), is another MAXALIGN(size) needed prior to the return statement ?\n\nHmm, you mean that we need MAXALIGN(size) after adding the size of\nFdwXactData structs?\n\nSize\nFdwXactShmemSize(void)\n{\n Size size;\n\n /* Size for foreign transaction information array */\n size = offsetof(FdwXactCtlData, fdwxacts);\n size = add_size(size, mul_size(max_prepared_foreign_xacts,\n sizeof(FdwXact)));\n size = MAXALIGN(size);\n size = add_size(size, mul_size(max_prepared_foreign_xacts,\n sizeof(FdwXactData)));\n\n return size;\n}\n\nI don't think we need to do that. Looking at other similar code such\nas TwoPhaseShmemSize() doesn't do that. Why do you think we need that?\n\n>\n> + fdwxact = FdwXactInsertFdwXactEntry(xid, fdw_part);\n>\n> For the function name, Fdw and Xact appear twice, each. Maybe one of them can be dropped ?\n\nAgreed. Changed to FdwXactInsertEntry().\n\n>\n> + * we don't need to anything for this participant because all foreign\n>\n> 'need to' -> 'need to do'\n\nFixed.\n\n>\n> + else if (TransactionIdDidAbort(xid))\n> + return FDWXACT_STATUS_ABORTING;\n> +\n> the 'else' can be omitted since the preceding if would return.\n\nFixed.\n\n>\n> + if (max_prepared_foreign_xacts <= 0)\n>\n> I wonder when the value for max_prepared_foreign_xacts would be negative (and whether that should be considered an error).\n>\n\nFixed to (max_prepared_foreign_xacts == 0)\n\nAttached the updated version patch set.\n\nRegards,\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/",
"msg_date": "Thu, 14 Jan 2021 14:50:01 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "Hi,\nFor v32-0008-Prepare-foreign-transactions-at-commit-time.patch :\n\n+ bool have_notwophase = false;\n\nMaybe name the variable have_no_twophase so that it is easier to read.\n\n+ * Two-phase commit is not required if the number of servers performed\n\nperformed -> performing\n\n+ errmsg(\"cannot process a distributed transaction that has\noperated on a foreign server that does not support two-phase commit\nprotocol\"),\n+ errdetail(\"foreign_twophase_commit is \\'required\\' but the\ntransaction has some foreign servers which are not capable of two-phase\ncommit\")));\n\nThe lines are really long. Please wrap into more lines.\n\n\n\nOn Wed, Jan 13, 2021 at 9:50 PM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> On Thu, Jan 7, 2021 at 11:44 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> >\n> > Hi,\n>\n> Thank you for reviewing the patch!\n>\n> > For pg-foreign/v31-0004-Add-PrepareForeignTransaction-API.patch :\n> >\n> > However these functions are not neither committed nor aborted at\n> >\n> > I think the double negation was not intentional. Should be 'are neither\n> ...'\n>\n> Fixed.\n>\n> >\n> > For FdwXactShmemSize(), is another MAXALIGN(size) needed prior to the\n> return statement ?\n>\n> Hmm, you mean that we need MAXALIGN(size) after adding the size of\n> FdwXactData structs?\n>\n> Size\n> FdwXactShmemSize(void)\n> {\n> Size size;\n>\n> /* Size for foreign transaction information array */\n> size = offsetof(FdwXactCtlData, fdwxacts);\n> size = add_size(size, mul_size(max_prepared_foreign_xacts,\n> sizeof(FdwXact)));\n> size = MAXALIGN(size);\n> size = add_size(size, mul_size(max_prepared_foreign_xacts,\n> sizeof(FdwXactData)));\n>\n> return size;\n> }\n>\n> I don't think we need to do that. Looking at other similar code such\n> as TwoPhaseShmemSize() doesn't do that. Why do you think we need that?\n>\n> >\n> > + fdwxact = FdwXactInsertFdwXactEntry(xid, fdw_part);\n> >\n> > For the function name, Fdw and Xact appear twice, each. 
Maybe one of\n> them can be dropped ?\n>\n> Agreed. Changed to FdwXactInsertEntry().\n>\n> >\n> > + * we don't need to anything for this participant because all\n> foreign\n> >\n> > 'need to' -> 'need to do'\n>\n> Fixed.\n>\n> >\n> > + else if (TransactionIdDidAbort(xid))\n> > + return FDWXACT_STATUS_ABORTING;\n> > +\n> > the 'else' can be omitted since the preceding if would return.\n>\n> Fixed.\n>\n> >\n> > + if (max_prepared_foreign_xacts <= 0)\n> >\n> > I wonder when the value for max_prepared_foreign_xacts would be\n> negative (and whether that should be considered an error).\n> >\n>\n> Fixed to (max_prepared_foreign_xacts == 0)\n>\n> Attached the updated version patch set.\n>\n> Regards,\n>\n> --\n> Masahiko Sawada\n> EnterpriseDB: https://www.enterprisedb.com/\n>",
"msg_date": "Thu, 14 Jan 2021 11:04:30 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "For v32-0002-postgres_fdw-supports-commit-and-rollback-APIs.patch :\n\n+ entry->changing_xact_state = true;\n...\n+ entry->changing_xact_state = abort_cleanup_failure;\n\nI don't see return statement in between the two assignments. I wonder\nwhy entry->changing_xact_state is set to true, and later being assigned\nagain.\n\nFor v32-0007-Introduce-foreign-transaction-launcher-and-resol.patch :\n\nbq. This commits introduces to new background processes: foreign\n\ncommits introduces to new -> commit introduces two new\n\n+FdwXactExistsXid(TransactionId xid)\n\nSince Xid is the parameter to this method, I think the Xid suffix can be\ndropped from the method name.\n\n+ * Portions Copyright (c) 2020, PostgreSQL Global Development Group\n\nPlease correct year in the next patch set.\n\n+FdwXactLauncherRequestToLaunch(void)\n\nSince the launcher's job is to 'launch', I think the Launcher can be\nomitted from the method name.\n\n+/* Report shared memory space needed by FdwXactRsoverShmemInit */\n+Size\n+FdwXactRslvShmemSize(void)\n\nAre both Rsover and Rslv referring to resolver ? It would be better to use\nwhole word which reduces confusion.\nPlus, FdwXactRsoverShmemInit should be FdwXactRslvShmemInit (or\nFdwXactResolveShmemInit)\n\n+fdwxact_launch_resolver(Oid dbid)\n\nThe above method is not in camel case. It would be better if method names\nare consistent (in casing).\n\n+ errmsg(\"out of foreign transaction resolver slots\"),\n+ errhint(\"You might need to increase\nmax_foreign_transaction_resolvers.\")));\n\nIt would be nice to include the value of max_foreign_xact_resolvers\n\nFor fdwxact_resolver_onexit():\n\n+ LWLockAcquire(FdwXactLock, LW_EXCLUSIVE);\n+ fdwxact->locking_backend = InvalidBackendId;\n+ LWLockRelease(FdwXactLock);\n\nThere is no call to method inside the for loop which may take time. 
I\nwonder if the lock can be obtained prior to the for loop and released\ncoming out of the for loop.\n\n+FXRslvLoop(void)\n\nPlease use Resolver instead of Rslv\n\n+ FdwXactResolveFdwXacts(held_fdwxacts, nheld);\n\nFdw and Xact are repeated twice each in the method name. Probably the\nmethod name can be made shorter.\n\nCheers\n\nOn Thu, Jan 14, 2021 at 11:04 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n> Hi,\n> For v32-0008-Prepare-foreign-transactions-at-commit-time.patch :\n>\n> + bool have_notwophase = false;\n>\n> Maybe name the variable have_no_twophase so that it is easier to read.\n>\n> + * Two-phase commit is not required if the number of servers performed\n>\n> performed -> performing\n>\n> + errmsg(\"cannot process a distributed transaction that has\n> operated on a foreign server that does not support two-phase commit\n> protocol\"),\n> + errdetail(\"foreign_twophase_commit is \\'required\\' but\n> the transaction has some foreign servers which are not capable of two-phase\n> commit\")));\n>\n> The lines are really long. Please wrap into more lines.\n>\n>\n>\n> On Wed, Jan 13, 2021 at 9:50 PM Masahiko Sawada <sawada.mshk@gmail.com>\n> wrote:\n>\n>> On Thu, Jan 7, 2021 at 11:44 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>> >\n>> > Hi,\n>>\n>> Thank you for reviewing the patch!\n>>\n>> > For pg-foreign/v31-0004-Add-PrepareForeignTransaction-API.patch :\n>> >\n>> > However these functions are not neither committed nor aborted at\n>> >\n>> > I think the double negation was not intentional. 
Should be 'are neither\n>> ...'\n>>\n>> Fixed.\n>>\n>> >\n>> > For FdwXactShmemSize(), is another MAXALIGN(size) needed prior to the\n>> return statement ?\n>>\n>> Hmm, you mean that we need MAXALIGN(size) after adding the size of\n>> FdwXactData structs?\n>>\n>> Size\n>> FdwXactShmemSize(void)\n>> {\n>> Size size;\n>>\n>> /* Size for foreign transaction information array */\n>> size = offsetof(FdwXactCtlData, fdwxacts);\n>> size = add_size(size, mul_size(max_prepared_foreign_xacts,\n>> sizeof(FdwXact)));\n>> size = MAXALIGN(size);\n>> size = add_size(size, mul_size(max_prepared_foreign_xacts,\n>> sizeof(FdwXactData)));\n>>\n>> return size;\n>> }\n>>\n>> I don't think we need to do that. Looking at other similar code such\n>> as TwoPhaseShmemSize() doesn't do that. Why do you think we need that?\n>>\n>> >\n>> > + fdwxact = FdwXactInsertFdwXactEntry(xid, fdw_part);\n>> >\n>> > For the function name, Fdw and Xact appear twice, each. Maybe one of\n>> them can be dropped ?\n>>\n>> Agreed. Changed to FdwXactInsertEntry().\n>>\n>> >\n>> > + * we don't need to anything for this participant because all\n>> foreign\n>> >\n>> > 'need to' -> 'need to do'\n>>\n>> Fixed.\n>>\n>> >\n>> > + else if (TransactionIdDidAbort(xid))\n>> > + return FDWXACT_STATUS_ABORTING;\n>> > +\n>> > the 'else' can be omitted since the preceding if would return.\n>>\n>> Fixed.\n>>\n>> >\n>> > + if (max_prepared_foreign_xacts <= 0)\n>> >\n>> > I wonder when the value for max_prepared_foreign_xacts would be\n>> negative (and whether that should be considered an error).\n>> >\n>>\n>> Fixed to (max_prepared_foreign_xacts == 0)\n>>\n>> Attached the updated version patch set.\n>>\n>> Regards,\n>>\n>> --\n>> Masahiko Sawada\n>> EnterpriseDB: https://www.enterprisedb.com/\n>>\n>",
"msg_date": "Thu, 14 Jan 2021 14:46:40 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, Jan 15, 2021 at 4:03 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> Hi,\n> For v32-0008-Prepare-foreign-transactions-at-commit-time.patch :\n\nThank you for reviewing the patch!\n\n>\n> + bool have_notwophase = false;\n>\n> Maybe name the variable have_no_twophase so that it is easier to read.\n\nFixed.\n\n>\n> + * Two-phase commit is not required if the number of servers performed\n>\n> performed -> performing\n\nFixed.\n\n>\n> + errmsg(\"cannot process a distributed transaction that has operated on a foreign server that does not support two-phase commit protocol\"),\n> + errdetail(\"foreign_twophase_commit is \\'required\\' but the transaction has some foreign servers which are not capable of two-phase commit\")));\n>\n> The lines are really long. Please wrap into more lines.\n\nHmm, we can do that but if we do that, it makes grepping by the error\nmessage hard. Please refer to the documentation about the formatting\nguideline[1]:\n\nLimit line lengths so that the code is readable in an 80-column\nwindow. (This doesn't mean that you must never go past 80 columns. For\ninstance, breaking a long error message string in arbitrary places\njust to keep the code within 80 columns is probably not a net gain in\nreadability.)\n\nThese changes have been made in the local branch. I'll post the\nupdated patch set after incorporating all the comments.\n\nRegards,\n\n[1] https://www.postgresql.org/docs/devel/source-format.html\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 15 Jan 2021 14:17:12 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "Hi,\nFor v32-0004-Add-PrepareForeignTransaction-API.patch :\n\n+ * Whenever a foreign transaction is processed, the corresponding FdwXact\n+ * entry is update. To avoid holding the lock during transaction\nprocessing\n+ * which may take an unpredicatable time the in-memory data of foreign\n\nentry is update -> entry is updated\n\nunpredicatable -> unpredictable\n\n+ int nlefts = 0;\n\nnlefts -> nremaining\n\n+ elog(DEBUG1, \"left %u foreign transactions\", nlefts);\n\nThe message can be phrased as \"%u foreign transactions remaining\"\n\n+FdwXactResolveFdwXacts(int *fdwxact_idxs, int nfdwxacts)\n\nFdw and Xact are repeated. Seems one should suffice. How about naming the\nmethod FdwXactResolveTransactions() ?\nSimilar comment for FdwXactResolveOneFdwXact(FdwXact fdwxact)\n\nFor get_fdwxact():\n\n+ /* This entry matches the condition */\n+ found = true;\n+ break;\n\nInstead of breaking and returning, you can return within the loop directly.\n\nCheers\n\nOn Thu, Jan 14, 2021 at 9:17 PM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> On Fri, Jan 15, 2021 at 4:03 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> >\n> > Hi,\n> > For v32-0008-Prepare-foreign-transactions-at-commit-time.patch :\n>\n> Thank you for reviewing the patch!\n>\n> >\n> > + bool have_notwophase = false;\n> >\n> > Maybe name the variable have_no_twophase so that it is easier to read.\n>\n> Fixed.\n>\n> >\n> > + * Two-phase commit is not required if the number of servers\n> performed\n> >\n> > performed -> performing\n>\n> Fixed.\n>\n> >\n> > + errmsg(\"cannot process a distributed transaction that\n> has operated on a foreign server that does not support two-phase commit\n> protocol\"),\n> > + errdetail(\"foreign_twophase_commit is \\'required\\' but\n> the transaction has some foreign servers which are not capable of two-phase\n> commit\")));\n> >\n> > The lines are really long. 
Please wrap into more lines.\n>\n> Hmm, we can do that but if we do that, it makes grepping by the error\n> message hard. Please refer to the documentation about the formatting\n> guideline[1]:\n>\n> Limit line lengths so that the code is readable in an 80-column\n> window. (This doesn't mean that you must never go past 80 columns. For\n> instance, breaking a long error message string in arbitrary places\n> just to keep the code within 80 columns is probably not a net gain in\n> readability.)\n>\n> These changes have been made in the local branch. I'll post the\n> updated patch set after incorporating all the comments.\n>\n> Regards,\n>\n> [1] https://www.postgresql.org/docs/devel/source-format.html\n>\n> --\n> Masahiko Sawada\n> EnterpriseDB: https://www.enterprisedb.com/\n>",
"msg_date": "Fri, 15 Jan 2021 08:40:46 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, Jan 15, 2021 at 7:45 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> For v32-0002-postgres_fdw-supports-commit-and-rollback-APIs.patch :\n>\n> + entry->changing_xact_state = true;\n> ...\n> + entry->changing_xact_state = abort_cleanup_failure;\n>\n> I don't see return statement in between the two assignments. I wonder why entry->changing_xact_state is set to true, and later being assigned again.\n\nBecause postgresRollbackForeignTransaction() can get called again in\ncase where an error occurred during aborting and cleanup the\ntransaction. For example, if an error occurred when executing ABORT\nTRANSACTION (pgfdw_get_cleanup_result() could emit an ERROR),\npostgresRollbackForeignTransaction() will get called again while\nentry->changing_xact_state is still true. Then the entry will be\ncaught by the following condition and cleaned up:\n\n /*\n * If connection is before starting transaction or is already unsalvageable,\n * do only the cleanup and don't touch it further.\n */\n if (entry->changing_xact_state)\n {\n pgfdw_cleanup_after_transaction(entry);\n return;\n }\n\n>\n> For v32-0007-Introduce-foreign-transaction-launcher-and-resol.patch :\n>\n> bq. This commits introduces to new background processes: foreign\n>\n> commits introduces to new -> commit introduces two new\n\nFixed.\n\n>\n> +FdwXactExistsXid(TransactionId xid)\n>\n> Since Xid is the parameter to this method, I think the Xid suffix can be dropped from the method name.\n\nBut there is already a function named FdwXactExists()?\n\nbool\nFdwXactExists(Oid dbid, Oid serverid, Oid userid)\n\nAs far as I read other code, we already have such functions that have\nthe same functionality but have different arguments. For instance,\nSearchSysCacheExists() and SearchSysCacheExistsAttName(). 
So I think\nwe can leave as it is but is it better to have like\nFdwXactCheckExistence() and FdwXactCheckExistenceByXid()?\n\n>\n> + * Portions Copyright (c) 2020, PostgreSQL Global Development Group\n>\n> Please correct year in the next patch set.\n\nFixed.\n\n>\n> +FdwXactLauncherRequestToLaunch(void)\n>\n> Since the launcher's job is to 'launch', I think the Launcher can be omitted from the method name.\n\nAgreed. How about FdwXactRequestToLaunchResolver()?\n\n>\n> +/* Report shared memory space needed by FdwXactRsoverShmemInit */\n> +Size\n> +FdwXactRslvShmemSize(void)\n>\n> Are both Rsover and Rslv referring to resolver ? It would be better to use whole word which reduces confusion.\n> Plus, FdwXactRsoverShmemInit should be FdwXactRslvShmemInit (or FdwXactResolveShmemInit)\n\nAgreed. I realized that these functions are the launcher's function,\nnot resolver's. So I'd change to FdwXactLauncherShmemSize() and\nFdwXactLauncherShmemInit() respectively.\n\n>\n> +fdwxact_launch_resolver(Oid dbid)\n>\n> The above method is not in camel case. It would be better if method names are consistent (in casing).\n\nFixed.\n\n>\n> + errmsg(\"out of foreign transaction resolver slots\"),\n> + errhint(\"You might need to increase max_foreign_transaction_resolvers.\")));\n>\n> It would be nice to include the value of max_foreign_xact_resolvers\n\nI agree it would be nice but looking at other code we don't include\nthe value in this kind of messages.\n\n>\n> For fdwxact_resolver_onexit():\n>\n> + LWLockAcquire(FdwXactLock, LW_EXCLUSIVE);\n> + fdwxact->locking_backend = InvalidBackendId;\n> + LWLockRelease(FdwXactLock);\n>\n> There is no call to method inside the for loop which may take time. 
I wonder if the lock can be obtained prior to the for loop and released coming out of the for loop.\n\nAgreed.\n\n>\n> +FXRslvLoop(void)\n>\n> Please use Resolver instead of Rslv\n\nFixed.\n\n>\n> + FdwXactResolveFdwXacts(held_fdwxacts, nheld);\n>\n> Fdw and Xact are repeated twice each in the method name. Probably the method name can be made shorter.\n\nFixed.\n\nRegards,\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 18 Jan 2021 14:54:33 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "Hi, Masahiko-san:\n\nbq. How about FdwXactRequestToLaunchResolver()?\n\nSounds good to me.\n\nbq. But there is already a function named FdwXactExists()\n\nThen we can leave the function name as it is.\n\nCheers\n\nOn Sun, Jan 17, 2021 at 9:55 PM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> On Fri, Jan 15, 2021 at 7:45 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> >\n> > For v32-0002-postgres_fdw-supports-commit-and-rollback-APIs.patch :\n> >\n> > + entry->changing_xact_state = true;\n> > ...\n> > + entry->changing_xact_state = abort_cleanup_failure;\n> >\n> > I don't see return statement in between the two assignments. I wonder\n> why entry->changing_xact_state is set to true, and later being assigned\n> again.\n>\n> Because postgresRollbackForeignTransaction() can get called again in\n> case where an error occurred during aborting and cleanup the\n> transaction. For example, if an error occurred when executing ABORT\n> TRANSACTION (pgfdw_get_cleanup_result() could emit an ERROR),\n> postgresRollbackForeignTransaction() will get called again while\n> entry->changing_xact_state is still true. Then the entry will be\n> caught by the following condition and cleaned up:\n>\n> /*\n> * If connection is before starting transaction or is already\n> unsalvageable,\n> * do only the cleanup and don't touch it further.\n> */\n> if (entry->changing_xact_state)\n> {\n> pgfdw_cleanup_after_transaction(entry);\n> return;\n> }\n>\n> >\n> > For v32-0007-Introduce-foreign-transaction-launcher-and-resol.patch :\n> >\n> > bq. 
This commits introduces to new background processes: foreign\n> >\n> > commits introduces to new -> commit introduces two new\n>\n> Fixed.\n>\n> >\n> > +FdwXactExistsXid(TransactionId xid)\n> >\n> > Since Xid is the parameter to this method, I think the Xid suffix can be\n> dropped from the method name.\n>\n> But there is already a function named FdwXactExists()?\n>\n> bool\n> FdwXactExists(Oid dbid, Oid serverid, Oid userid)\n>\n> As far as I read other code, we already have such functions that have\n> the same functionality but have different arguments. For instance,\n> SearchSysCacheExists() and SearchSysCacheExistsAttName(). So I think\n> we can leave as it is but is it better to have like\n> FdwXactCheckExistence() and FdwXactCheckExistenceByXid()?\n>\n> >\n> > + * Portions Copyright (c) 2020, PostgreSQL Global Development Group\n> >\n> > Please correct year in the next patch set.\n>\n> Fixed.\n>\n> >\n> > +FdwXactLauncherRequestToLaunch(void)\n> >\n> > Since the launcher's job is to 'launch', I think the Launcher can be\n> omitted from the method name.\n>\n> Agreed. How about FdwXactRequestToLaunchResolver()?\n>\n> >\n> > +/* Report shared memory space needed by FdwXactRsoverShmemInit */\n> > +Size\n> > +FdwXactRslvShmemSize(void)\n> >\n> > Are both Rsover and Rslv referring to resolver ? It would be better to\n> use whole word which reduces confusion.\n> > Plus, FdwXactRsoverShmemInit should be FdwXactRslvShmemInit (or\n> FdwXactResolveShmemInit)\n>\n> Agreed. I realized that these functions are the launcher's function,\n> not resolver's. So I'd change to FdwXactLauncherShmemSize() and\n> FdwXactLauncherShmemInit() respectively.\n>\n> >\n> > +fdwxact_launch_resolver(Oid dbid)\n> >\n> > The above method is not in camel case. 
It would be better if method\n> names are consistent (in casing).\n>\n> Fixed.\n>\n> >\n> > + errmsg(\"out of foreign transaction resolver slots\"),\n> > + errhint(\"You might need to increase\n> max_foreign_transaction_resolvers.\")));\n> >\n> > It would be nice to include the value of max_foreign_xact_resolvers\n>\n> I agree it would be nice but looking at other code we don't include\n> the value in this kind of messages.\n>\n> >\n> > For fdwxact_resolver_onexit():\n> >\n> > + LWLockAcquire(FdwXactLock, LW_EXCLUSIVE);\n> > + fdwxact->locking_backend = InvalidBackendId;\n> > + LWLockRelease(FdwXactLock);\n> >\n> > There is no call to method inside the for loop which may take time. I\n> wonder if the lock can be obtained prior to the for loop and released\n> coming out of the for loop.\n>\n> Agreed.\n>\n> >\n> > +FXRslvLoop(void)\n> >\n> > Please use Resolver instead of Rslv\n>\n> Fixed.\n>\n> >\n> > + FdwXactResolveFdwXacts(held_fdwxacts, nheld);\n> >\n> > Fdw and Xact are repeated twice each in the method name. Probably the\n> method name can be made shorter.\n>\n> Fixed.\n>\n> Regards,\n>\n> --\n> Masahiko Sawada\n> EnterpriseDB: https://www.enterprisedb.com/\n>",
"msg_date": "Mon, 18 Jan 2021 06:14:34 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "\n\nOn 2021/01/18 14:54, Masahiko Sawada wrote:\n> On Fri, Jan 15, 2021 at 7:45 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>>\n>> For v32-0002-postgres_fdw-supports-commit-and-rollback-APIs.patch :\n>>\n>> + entry->changing_xact_state = true;\n>> ...\n>> + entry->changing_xact_state = abort_cleanup_failure;\n>>\n>> I don't see return statement in between the two assignments. I wonder why entry->changing_xact_state is set to true, and later being assigned again.\n> \n> Because postgresRollbackForeignTransaction() can get called again in\n> case where an error occurred during aborting and cleanup the\n> transaction. For example, if an error occurred when executing ABORT\n> TRANSACTION (pgfdw_get_cleanup_result() could emit an ERROR),\n> postgresRollbackForeignTransaction() will get called again while\n> entry->changing_xact_state is still true. Then the entry will be\n> caught by the following condition and cleaned up:\n> \n> /*\n> * If connection is before starting transaction or is already unsalvageable,\n> * do only the cleanup and don't touch it further.\n> */\n> if (entry->changing_xact_state)\n> {\n> pgfdw_cleanup_after_transaction(entry);\n> return;\n> }\n> \n>>\n>> For v32-0007-Introduce-foreign-transaction-launcher-and-resol.patch :\n>>\n>> bq. This commits introduces to new background processes: foreign\n>>\n>> commits introduces to new -> commit introduces two new\n> \n> Fixed.\n> \n>>\n>> +FdwXactExistsXid(TransactionId xid)\n>>\n>> Since Xid is the parameter to this method, I think the Xid suffix can be dropped from the method name.\n> \n> But there is already a function named FdwXactExists()?\n> \n> bool\n> FdwXactExists(Oid dbid, Oid serverid, Oid userid)\n> \n> As far as I read other code, we already have such functions that have\n> the same functionality but have different arguments. For instance,\n> SearchSysCacheExists() and SearchSysCacheExistsAttName(). 
So I think\n> we can leave as it is but is it better to have like\n> FdwXactCheckExistence() and FdwXactCheckExistenceByXid()?\n> \n>>\n>> + * Portions Copyright (c) 2020, PostgreSQL Global Development Group\n>>\n>> Please correct year in the next patch set.\n> \n> Fixed.\n> \n>>\n>> +FdwXactLauncherRequestToLaunch(void)\n>>\n>> Since the launcher's job is to 'launch', I think the Launcher can be omitted from the method name.\n> \n> Agreed. How about FdwXactRequestToLaunchResolver()?\n> \n>>\n>> +/* Report shared memory space needed by FdwXactRsoverShmemInit */\n>> +Size\n>> +FdwXactRslvShmemSize(void)\n>>\n>> Are both Rsover and Rslv referring to resolver ? It would be better to use whole word which reduces confusion.\n>> Plus, FdwXactRsoverShmemInit should be FdwXactRslvShmemInit (or FdwXactResolveShmemInit)\n> \n> Agreed. I realized that these functions are the launcher's function,\n> not resolver's. So I'd change to FdwXactLauncherShmemSize() and\n> FdwXactLauncherShmemInit() respectively.\n> \n>>\n>> +fdwxact_launch_resolver(Oid dbid)\n>>\n>> The above method is not in camel case. It would be better if method names are consistent (in casing).\n> \n> Fixed.\n> \n>>\n>> + errmsg(\"out of foreign transaction resolver slots\"),\n>> + errhint(\"You might need to increase max_foreign_transaction_resolvers.\")));\n>>\n>> It would be nice to include the value of max_foreign_xact_resolvers\n> \n> I agree it would be nice but looking at other code we don't include\n> the value in this kind of messages.\n> \n>>\n>> For fdwxact_resolver_onexit():\n>>\n>> + LWLockAcquire(FdwXactLock, LW_EXCLUSIVE);\n>> + fdwxact->locking_backend = InvalidBackendId;\n>> + LWLockRelease(FdwXactLock);\n>>\n>> There is no call to method inside the for loop which may take time. 
I wonder if the lock can be obtained prior to the for loop and released coming out of the for loop.\n> \n> Agreed.\n> \n>>\n>> +FXRslvLoop(void)\n>>\n>> Please use Resolver instead of Rslv\n> \n> Fixed.\n> \n>>\n>> + FdwXactResolveFdwXacts(held_fdwxacts, nheld);\n>>\n>> Fdw and Xact are repeated twice each in the method name. Probably the method name can be made shorter.\n> \n> Fixed.\n\nYou fixed some issues. But maybe you forgot to attach the latest patches?\n\nI'm reading 0001 and 0002 patches to pick up the changes for postgres_fdw that worth applying independent from 2PC feature. If there are such changes, IMO we can apply them in advance, and which would make the patches simpler.\n\n+\tif (PQresultStatus(res) != PGRES_COMMAND_OK)\n+\t\tereport(ERROR, (errmsg(\"could not commit transaction on server %s\",\n+\t\t\t\t\t\t\t frstate->server->servername)));\n\nYou changed the code this way because you want to include the server name in the error message? I agree that it's helpful to report also the server name that caused an error. OTOH, since this change gets rid of call to pgfdw_rerport_error() for the returned PGresult, the reported error message contains less information. If this understanding is right, I don't think that this change is an improvement.\n\nInstead, if the server name should be included in the error message, pgfdw_report_error() should be changed so that it also reports the server name? If we do that, the server name is reported not only when COMMIT fails but also when other commands fail.\n\nOf course, if this change is not essential, we can skip doing this in the first version.\n\n-\t/*\n-\t * Regardless of the event type, we can now mark ourselves as out of the\n-\t * transaction. (Note: if we are here during PRE_COMMIT or PRE_PREPARE,\n-\t * this saves a useless scan of the hashtable during COMMIT or PREPARE.)\n-\t */\n-\txact_got_connection = false;\n\nWith this change, xact_got_connection seems to never be set to false. 
Doesn't this break pgfdw_subxact_callback() using xact_got_connection?\n\n+\t/* Also reset cursor numbering for next transaction */\n+\tcursor_number = 0;\n\nOriginally this variable is reset to 0 once per transaction end. But with the patch, it's reset to 0 every time when a foreign transaction ends at each connection. This change would be harmless fortunately in practice, but seems not right theoretically.\n\nThis makes me wonder if new FDW API is not good at handling the case where some operations need to be performed once per transaction end.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 27 Jan 2021 10:29:52 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Wed, Jan 27, 2021 at 10:29 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n>\n>\n> You fixed some issues. But maybe you forgot to attach the latest patches?\n\nYes, I've attached the updated patches.\n\n>\n> I'm reading 0001 and 0002 patches to pick up the changes for postgres_fdw that worth applying independent from 2PC feature. If there are such changes, IMO we can apply them in advance, and which would make the patches simpler.\n\nThank you for reviewing the patches!\n\n>\n> + if (PQresultStatus(res) != PGRES_COMMAND_OK)\n> + ereport(ERROR, (errmsg(\"could not commit transaction on server %s\",\n> + frstate->server->servername)));\n>\n> You changed the code this way because you want to include the server name in the error message? I agree that it's helpful to report also the server name that caused an error. OTOH, since this change gets rid of call to pgfdw_rerport_error() for the returned PGresult, the reported error message contains less information. If this understanding is right, I don't think that this change is an improvement.\n\nRight. It's better to use do_sql_command() instead.\n\n> Instead, if the server name should be included in the error message, pgfdw_report_error() should be changed so that it also reports the server name? If we do that, the server name is reported not only when COMMIT fails but also when other commands fail.\n>\n> Of course, if this change is not essential, we can skip doing this in the first version.\n\nYes, I think it's not essential for now. We can improve it later if we want.\n\n>\n> - /*\n> - * Regardless of the event type, we can now mark ourselves as out of the\n> - * transaction. (Note: if we are here during PRE_COMMIT or PRE_PREPARE,\n> - * this saves a useless scan of the hashtable during COMMIT or PREPARE.)\n> - */\n> - xact_got_connection = false;\n>\n> With this change, xact_got_connection seems to never be set to false. 
Doesn't this break pgfdw_subxact_callback() using xact_got_connection?\n\nI think xact_got_connection is set to false in\npgfdw_cleanup_after_transaction() that is called at the end of each\nforeign transaction (i.g., in postgresCommitForeignTransaction() and\npostgresRollbackForeignTransaction()).\n\nBut as you're concerned below, it's reset for each foreign transaction\nend rather than the parent's transaction end.\n\n>\n> + /* Also reset cursor numbering for next transaction */\n> + cursor_number = 0;\n>\n> Originally this variable is reset to 0 once per transaction end. But with the patch, it's reset to 0 every time when a foreign transaction ends at each connection. This change would be harmless fortunately in practice, but seems not right theoretically.\n>\n> This makes me wonder if new FDW API is not good at handling the case where some operations need to be performed once per transaction end.\n\nI think that the problem comes from the fact that FDW needs to use\nboth SubXactCallback and new FDW API.\n\nIf we want to perform some operations at the end of the top\ntransaction per FDW, not per foreign transaction, we will either still\nneed to use XactCallback or need to rethink the FDW API design. But\ngiven that we call commit and rollback FDW API for only foreign\nservers that actually started a transaction, I’m not sure if there are\nsuch operations in practice. IIUC there is not at least from the\nnormal (not-sub) transaction termination perspective.\n\nIIUC xact_got_transaction is used to skip iterating over all cached\nconnections to find open remote (sub) transactions. This is not\nnecessary anymore at least from the normal transaction termination\nperspective. So maybe we can improve it so that it tracks whether any\nof the cached connections opened a subtransaction. 
That is, we set it\ntrue when we created a savepoint on any connections and set it false\nat the end of pgfdw_subxact_callback() if we see that xact_depth of\nall cached entry is less than or equal to 1 after iterating over all\nentries.\n\nRegarding cursor_number, it essentially needs to be unique at least\nwithin a transaction so we can manage it per transaction or per\nconnection. But the current postgres_fdw rather ensure uniqueness\nacross all connections. So it seems to me that this can be fixed by\nmaking individual connection have cursor_number and resetting it in\npgfdw_cleanup_after_transaction(). I think this can be in a separate\npatch. Or it also could solve this problem that we terminate\nsubtransactions via a FDW API but I don't think it's a good idea.\n\nWhat do you think?\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Wed, 27 Jan 2021 14:08:14 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Sat, Jan 16, 2021 at 1:39 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> Hi,\n\nThank you for reviewing the patch!\n\n> For v32-0004-Add-PrepareForeignTransaction-API.patch :\n>\n> + * Whenever a foreign transaction is processed, the corresponding FdwXact\n> + * entry is update. To avoid holding the lock during transaction processing\n> + * which may take an unpredicatable time the in-memory data of foreign\n>\n> entry is update -> entry is updated\n>\n> unpredictable -> unpredictable\n\nFixed.\n¨\n>\n> + int nlefts = 0;\n>\n> nlefts -> nremaining\n>\n> + elog(DEBUG1, \"left %u foreign transactions\", nlefts);\n>\n> The message can be phrased as \"%u foreign transactions remaining\"\n\nFixed.\n\n>\n> +FdwXactResolveFdwXacts(int *fdwxact_idxs, int nfdwxacts)\n>\n> Fdw and Xact are repeated. Seems one should suffice. How about naming the method FdwXactResolveTransactions() ?\n> Similar comment for FdwXactResolveOneFdwXact(FdwXact fdwxact)\n\nAgreed. I changed to ResolveFdwXacts() and ResolveOneFdwXact()\nrespectively to avoid a long function name.\n\n>\n> For get_fdwxact():\n>\n> + /* This entry matches the condition */\n> + found = true;\n> + break;\n>\n> Instead of breaking and returning, you can return within the loop directly.\n\nFixed.\n\nThose changes are incorporated into the latest version patches[1] I\nsubmitted today.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoBYyA5O%2BFPN4Cs9YWiKjq319BvF5fYmKNsFTZfwTcWjQw%40mail.gmail.com\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 27 Jan 2021 16:22:26 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "\n\nOn 2021/01/27 14:08, Masahiko Sawada wrote:\n> On Wed, Jan 27, 2021 at 10:29 AM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>> You fixed some issues. But maybe you forgot to attach the latest patches?\n> \n> Yes, I've attached the updated patches.\n\nThanks for updating the patch! I tried to review 0001 and 0002 as the self-contained change.\n\n+ * An FDW that implements both commit and rollback APIs can request to register\n+ * the foreign transaction by FdwXactRegisterXact() to participate it to a\n+ * group of distributed tranasction. The registered foreign transactions are\n+ * identified by OIDs of server and user.\n\nI'm afraid that the combination of OIDs of server and user is not unique. IOW, more than one foreign transactions can have the same combination of OIDs of server and user. For example, the following two SELECT queries start the different foreign transactions but their user OID is the same. OID of user mapping should be used instead of OID of user?\n\n CREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw;\n CREATE USER MAPPING FOR postgres SERVER loopback OPTIONS (user 'postgres');\n CREATE USER MAPPING FOR public SERVER loopback OPTIONS (user 'postgres');\n CREATE TABLE t(i int);\n CREATE FOREIGN TABLE ft(i int) SERVER loopback OPTIONS (table_name 't');\n BEGIN;\n SELECT * FROM ft;\n DROP USER MAPPING FOR postgres SERVER loopback ;\n SELECT * FROM ft;\n COMMIT;\n\n+\t/* Commit foreign transactions if any */\n+\tAtEOXact_FdwXact(true);\n\nDon't we need to pass XACT_EVENT_PARALLEL_PRE_COMMIT or XACT_EVENT_PRE_COMMIT flag? Probably we don't need to do this if postgres_fdw is only user of this new API. But if we make this new API generic one, such flags seem necessary so that some foreign data wrappers might have different behaviors for those flags.\n\nBecause of the same reason as above, AtEOXact_FdwXact() should also be called after CallXactCallbacks(is_parallel_worker ? 
XACT_EVENT_PARALLEL_COMMIT : XACT_EVENT_COMMIT)?\n\n+\t/*\n+\t * Abort foreign transactions if any. This needs to be done before marking\n+\t * this transaction as not running since FDW's transaction callbacks might\n+\t * assume this transaction is still in progress.\n+\t */\n+\tAtEOXact_FdwXact(false);\n\nSame as above.\n\n+/*\n+ * This function is called at PREPARE TRANSACTION. Since we don't support\n+ * preparing foreign transactions yet, raise an error if the local transaction\n+ * has any foreign transaction.\n+ */\n+void\n+AtPrepare_FdwXact(void)\n+{\n+\tif (FdwXactParticipants != NIL)\n+\t\tereport(ERROR,\n+\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+\t\t\t\t errmsg(\"cannot PREPARE a transaction that has operated on foreign tables\")));\n+}\n\nThis means that some foreign data wrappers suppporting the prepare transaction (though I'm not sure if such wappers actually exist or not) cannot use the new API? If we want to allow those wrappers to use new API, AtPrepare_FdwXact() should call the prepare callback and each wrapper should emit an error within the callback if necessary.\n\n+\tforeach(lc, FdwXactParticipants)\n+\t{\n+\t\tFdwXactParticipant *fdw_part = (FdwXactParticipant *) lfirst(lc);\n+\n+\t\tif (fdw_part->server->serverid == serverid &&\n+\t\t\tfdw_part->usermapping->userid == userid)\n\nIsn't this ineffecient when starting lots of foreign transactions because we need to scan all the entries in the list every time?\n\n+static ConnCacheEntry *\n+GetConnectionCacheEntry(Oid umid)\n+{\n+\tbool\t\tfound;\n+\tConnCacheEntry *entry;\n+\tConnCacheKey key;\n+\n+\t/* First time through, initialize connection cache hashtable */\n+\tif (ConnectionHash == NULL)\n+\t{\n+\t\tHASHCTL\t\tctl;\n+\n+\t\tctl.keysize = sizeof(ConnCacheKey);\n+\t\tctl.entrysize = sizeof(ConnCacheEntry);\n+\t\tConnectionHash = hash_create(\"postgres_fdw connections\", 8,\n+\t\t\t\t\t\t\t\t\t &ctl,\n+\t\t\t\t\t\t\t\t\t HASH_ELEM | HASH_BLOBS);\n\nCurrently ConnectionHash is 
created under TopMemoryContext. With the patch, since GetConnectionCacheEntry() can be called in other places, ConnectionHash may be created under the memory context other than TopMemoryContext? If so, that's safe?\n\n-\t\tif (PQstatus(entry->conn) != CONNECTION_OK ||\n-\t\t\tPQtransactionStatus(entry->conn) != PQTRANS_IDLE ||\n-\t\t\tentry->changing_xact_state ||\n-\t\t\tentry->invalidated)\n...\n+\tif (PQstatus(entry->conn) != CONNECTION_OK ||\n+\t\tPQtransactionStatus(entry->conn) != PQTRANS_IDLE ||\n+\t\tentry->changing_xact_state)\n\nWhy did you get rid of the condition \"entry->invalidated\"?\n\n\n> \n>>\n>> I'm reading 0001 and 0002 patches to pick up the changes for postgres_fdw that worth applying independent from 2PC feature. If there are such changes, IMO we can apply them in advance, and which would make the patches simpler.\n> \n> Thank you for reviewing the patches!\n> \n>>\n>> + if (PQresultStatus(res) != PGRES_COMMAND_OK)\n>> + ereport(ERROR, (errmsg(\"could not commit transaction on server %s\",\n>> + frstate->server->servername)));\n>>\n>> You changed the code this way because you want to include the server name in the error message? I agree that it's helpful to report also the server name that caused an error. OTOH, since this change gets rid of call to pgfdw_rerport_error() for the returned PGresult, the reported error message contains less information. If this understanding is right, I don't think that this change is an improvement.\n> \n> Right. It's better to use do_sql_command() instead.\n> \n>> Instead, if the server name should be included in the error message, pgfdw_report_error() should be changed so that it also reports the server name? If we do that, the server name is reported not only when COMMIT fails but also when other commands fail.\n>>\n>> Of course, if this change is not essential, we can skip doing this in the first version.\n> \n> Yes, I think it's not essential for now. 
We can improve it later if we want.\n> \n>>\n>> - /*\n>> - * Regardless of the event type, we can now mark ourselves as out of the\n>> - * transaction. (Note: if we are here during PRE_COMMIT or PRE_PREPARE,\n>> - * this saves a useless scan of the hashtable during COMMIT or PREPARE.)\n>> - */\n>> - xact_got_connection = false;\n>>\n>> With this change, xact_got_connection seems to never be set to false. Doesn't this break pgfdw_subxact_callback() using xact_got_connection?\n> \n> I think xact_got_connection is set to false in\n> pgfdw_cleanup_after_transaction() that is called at the end of each\n> foreign transaction (i.g., in postgresCommitForeignTransaction() and\n> postgresRollbackForeignTransaction()).\n> \n> But as you're concerned below, it's reset for each foreign transaction\n> end rather than the parent's transaction end.\n> \n>>\n>> + /* Also reset cursor numbering for next transaction */\n>> + cursor_number = 0;\n>>\n>> Originally this variable is reset to 0 once per transaction end. But with the patch, it's reset to 0 every time when a foreign transaction ends at each connection. This change would be harmless fortunately in practice, but seems not right theoretically.\n>>\n>> This makes me wonder if new FDW API is not good at handling the case where some operations need to be performed once per transaction end.\n> \n> I think that the problem comes from the fact that FDW needs to use\n> both SubXactCallback and new FDW API.\n> \n> If we want to perform some operations at the end of the top\n> transaction per FDW, not per foreign transaction, we will either still\n> need to use XactCallback or need to rethink the FDW API design. But\n> given that we call commit and rollback FDW API for only foreign\n> servers that actually started a transaction, I’m not sure if there are\n> such operations in practice. 
IIUC there is not at least from the\n> normal (not-sub) transaction termination perspective.\n\nOne feature in my mind that may not match with this new API is to perform transaction commits on multiple servers in parallel. That's something like the following. As far as I can recall, another proposed version of 2pc on postgres_fdw patch included that feature. If we want to implement this to increase the performance of transaction commit in the future, I'm afraid that new API will prevent that.\n\n foreach(foreign transactions)\n send commit command\n\n foreach(foreign transactions)\n wait for reply of commit\n\nOn second thought, new per-transaction commit/rollback callback is essential when users or the resolver process want to resolve the specified foreign transaction, but not essential when backends commit/rollback foreign transactions. That is, even if we add per-transaction new API for users and resolver process, backends can still use CallXactCallbacks() when they commit/rollback foreign transactions. Is this understanding right?\n\n\n> \n> IIUC xact_got_transaction is used to skip iterating over all cached\n> connections to find open remote (sub) transactions. This is not\n> necessary anymore at least from the normal transaction termination\n> perspective. So maybe we can improve it so that it tracks whether any\n> of the cached connections opened a subtransaction. That is, we set it\n> true when we created a savepoint on any connections and set it false\n> at the end of pgfdw_subxact_callback() if we see that xact_depth of\n> all cached entry is less than or equal to 1 after iterating over all\n> entries.\n\nOK.\n\n\n> Regarding cursor_number, it essentially needs to be unique at least\n> within a transaction so we can manage it per transaction or per\n> connection. But the current postgres_fdw rather ensure uniqueness\n> across all connections. 
So it seems to me that this can be fixed by\n> making individual connection have cursor_number and resetting it in\n> pgfdw_cleanup_after_transaction(). I think this can be in a separate\n> patch.\n\nMaybe, so let's work on this later, at least after we confirm that\nthis change is really necessary.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 2 Feb 2021 17:18:53 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Tue, Feb 2, 2021 at 5:18 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/01/27 14:08, Masahiko Sawada wrote:\n> > On Wed, Jan 27, 2021 at 10:29 AM Fujii Masao\n> > <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >> You fixed some issues. But maybe you forgot to attach the latest patches?\n> >\n> > Yes, I've attached the updated patches.\n>\n> Thanks for updating the patch! I tried to review 0001 and 0002 as the self-contained change.\n>\n> + * An FDW that implements both commit and rollback APIs can request to register\n> + * the foreign transaction by FdwXactRegisterXact() to participate it to a\n> + * group of distributed tranasction. The registered foreign transactions are\n> + * identified by OIDs of server and user.\n>\n> I'm afraid that the combination of OIDs of server and user is not unique. IOW, more than one foreign transactions can have the same combination of OIDs of server and user. For example, the following two SELECT queries start the different foreign transactions but their user OID is the same. OID of user mapping should be used instead of OID of user?\n>\n> CREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw;\n> CREATE USER MAPPING FOR postgres SERVER loopback OPTIONS (user 'postgres');\n> CREATE USER MAPPING FOR public SERVER loopback OPTIONS (user 'postgres');\n> CREATE TABLE t(i int);\n> CREATE FOREIGN TABLE ft(i int) SERVER loopback OPTIONS (table_name 't');\n> BEGIN;\n> SELECT * FROM ft;\n> DROP USER MAPPING FOR postgres SERVER loopback ;\n> SELECT * FROM ft;\n> COMMIT;\n\nGood catch. I've considered using user mapping OID or a pair of user\nmapping OID and server OID as a key of foreign transactions but I\nthink it also has a problem if an FDW caches the connection by pair of\nserver OID and user OID whereas the core identifies them by user\nmapping OID. 
For instance, mysql_fdw manages connections by pair of\nserver OID and user OID.\n\nFor example, let's consider the following execution:\n\nBEGIN;\nSET ROLE user_A;\nINSERT INTO ft1 VALUES (1);\nSET ROLE user_B;\nINSERT INTO ft1 VALUES (1);\nCOMMIT;\n\nSuppose that an FDW identifies the connections by {server OID, user\nOID} and the core GTM identifies the transactions by user mapping OID,\nand user_A and user_B use the public user mapping to connect server_X.\nIn the FDW, there are two connections identified by {user_A, sever_X}\nand {user_B, server_X} respectively, and therefore opens two\ntransactions on each connection, while GTM has only one FdwXact entry\nbecause the two connections refer to the same user mapping OID. As a\nresult, at the end of the transaction, GTM ends only one foreign\ntransaction, leaving another one.\n\nUsing user mapping OID seems natural to me but I'm concerned that\nchanging role in the middle of transaction is likely to happen than\ndropping the public user mapping but not sure. We would need to find\nmore better way.\n\n>\n> + /* Commit foreign transactions if any */\n> + AtEOXact_FdwXact(true);\n>\n> Don't we need to pass XACT_EVENT_PARALLEL_PRE_COMMIT or XACT_EVENT_PRE_COMMIT flag? Probably we don't need to do this if postgres_fdw is only user of this new API. But if we make this new API generic one, such flags seem necessary so that some foreign data wrappers might have different behaviors for those flags.\n>\n> Because of the same reason as above, AtEOXact_FdwXact() should also be called after CallXactCallbacks(is_parallel_worker ? XACT_EVENT_PARALLEL_COMMIT : XACT_EVENT_COMMIT)?\n\nAgreed.\n\nIn AtEOXact_FdwXact() we call either CommitForeignTransaction() or\nRollbackForeignTransaction() with FDWXACT_FLAG_ONEPHASE flag for each\nforeign transaction. So for example in commit case, we will call new\nFDW APIs in the following order:\n\n1. 
Call CommitForeignTransaction() with XACT_EVENT_PARALLEL_PRE_COMMIT\nflag and FDWXACT_FLAG_ONEPHASE flag for each foreign transaction.\n2. Commit locally.\n3. Call CommitForeignTransaction() with XACT_EVENT_PARALLEL_COMMIT\nflag and FDWXACT_FLAG_ONEPHASE flag for each foreign transaction.\n\nIn the future when we have a new FDW API to prepare foreign\ntransaction, the sequence will be:\n\n1. Call PrepareForeignTransaction() for each foreign transaction.\n2. Call CommitForeignTransaction() with XACT_EVENT_PARALLEL_PRE_COMMIT\nflag for each foreign transaction.\n3. Commit locally.\n4. Call CommitForeignTransaction() with XACT_EVENT_PARALLEL_COMMIT\nflag for each foreign transaction.\n\nSo we expect FDW that wants to support 2PC not to commit foreign\ntransaction if CommitForeignTransaction() is called with\nXACT_EVENT_PARALLEL_PRE_COMMIT flag and no FDWXACT_FLAG_ONEPHASE flag.\n\n>\n> + /*\n> + * Abort foreign transactions if any. This needs to be done before marking\n> + * this transaction as not running since FDW's transaction callbacks might\n> + * assume this transaction is still in progress.\n> + */\n> + AtEOXact_FdwXact(false);\n>\n> Same as above.\n>\n> +/*\n> + * This function is called at PREPARE TRANSACTION. Since we don't support\n> + * preparing foreign transactions yet, raise an error if the local transaction\n> + * has any foreign transaction.\n> + */\n> +void\n> +AtPrepare_FdwXact(void)\n> +{\n> + if (FdwXactParticipants != NIL)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> + errmsg(\"cannot PREPARE a transaction that has operated on foreign tables\")));\n> +}\n>\n> This means that some foreign data wrappers suppporting the prepare transaction (though I'm not sure if such wappers actually exist or not) cannot use the new API? 
If we want to allow those wrappers to use new API, AtPrepare_FdwXact() should call the prepare callback and each wrapper should emit an error within the callback if necessary.\n\nI think if we support the prepare callback and allow FDWs to prepare\nforeign transactions, we have to call CommitForeignTransaction() on\nCOMMIT PREPARED for foreign transactions that are associated with the\nlocal prepared transaction. But how can we know which foreign\ntransactions are? Even a client who didn’t do PREPARE TRANSACTION\ncould do COMMIT PREPARED. We would need to store the information of\nwhich foreign transactions are associated with the local transaction\nsomewhere. The 0004 patch introduces WAL logging along with prepare\nAPI and we store that information to a WAL record. I think it’s better\nat this time to disallow PREPARE TRANSACTION when at least one foreign\ntransaction is registered via FDW API.\n\n>\n> + foreach(lc, FdwXactParticipants)\n> + {\n> + FdwXactParticipant *fdw_part = (FdwXactParticipant *) lfirst(lc);\n> +\n> + if (fdw_part->server->serverid == serverid &&\n> + fdw_part->usermapping->userid == userid)\n>\n> Isn't this ineffecient when starting lots of foreign transactions because we need to scan all the entries in the list every time?\n\nAgreed. I'll change it to a hash map.\n\n>\n> +static ConnCacheEntry *\n> +GetConnectionCacheEntry(Oid umid)\n> +{\n> + bool found;\n> + ConnCacheEntry *entry;\n> + ConnCacheKey key;\n> +\n> + /* First time through, initialize connection cache hashtable */\n> + if (ConnectionHash == NULL)\n> + {\n> + HASHCTL ctl;\n> +\n> + ctl.keysize = sizeof(ConnCacheKey);\n> + ctl.entrysize = sizeof(ConnCacheEntry);\n> + ConnectionHash = hash_create(\"postgres_fdw connections\", 8,\n> + &ctl,\n> + HASH_ELEM | HASH_BLOBS);\n>\n> Currently ConnectionHash is created under TopMemoryContext. 
With the patch, since GetConnectionCacheEntry() can be called in other places, ConnectionHash may be created under the memory context other than TopMemoryContext? If so, that's safe?\n\nhash_create() creates a hash map under TopMemoryContext unless\nHASH_CONTEXT is specified. So I think ConnectionHash is still created\nin the same memory context.\n\n>\n> - if (PQstatus(entry->conn) != CONNECTION_OK ||\n> - PQtransactionStatus(entry->conn) != PQTRANS_IDLE ||\n> - entry->changing_xact_state ||\n> - entry->invalidated)\n> ...\n> + if (PQstatus(entry->conn) != CONNECTION_OK ||\n> + PQtransactionStatus(entry->conn) != PQTRANS_IDLE ||\n> + entry->changing_xact_state)\n>\n> Why did you get rid of the condition \"entry->invalidated\"?\n\nMy bad. I'll fix it.\n\n> >\n> > If we want to perform some operations at the end of the top\n> > transaction per FDW, not per foreign transaction, we will either still\n> > need to use XactCallback or need to rethink the FDW API design. But\n> > given that we call commit and rollback FDW API for only foreign\n> > servers that actually started a transaction, I’m not sure if there are\n> > such operations in practice. IIUC there is not at least from the\n> > normal (not-sub) transaction termination perspective.\n>\n> One feature in my mind that may not match with this new API is to perform transaction commits on multiple servers in parallel. That's something like the following. As far as I can recall, another proposed version of 2pc on postgres_fdw patch included that feature. If we want to implement this to increase the performance of transaction commit in the future, I'm afraid that new API will prevent that.\n>\n> foreach(foreign transactions)\n> send commit command\n>\n> foreach(foreign transactions)\n> wait for reply of commit\n\nWhat I'm thinking is to pass a flag, say FDWXACT_ASYNC, to\nCommit/RollbackForeignTransaction() and add a new API to wait for the\noperation to complete, say CompleteForeignTransaction(). 
If\ncommit/rollback callback in an FDW is called with FDWXACT_ASYNC flag,\nit should send the command and immediately return the handler (e.g.,\nPQsocket() in postgres_fdw). The GTM gathers the handlers and polls\nevents on them. To complete the command, the GTM calls\nCompleteForeignTransaction() to wait for the command to complete.\nPlease refer to XA specification for details (especially xa_complete()\nand TMASYNC flag). The pseudo-code is something like the following:\n\n foreach (foreign transactions)\n call CommitForeignTransaction(FDWXACT_ASYNC);\n append the returned fd to the array.\n\n while (true)\n {\n poll event on fds;\n call CompleteForeignTransaction() for fd owner;\n if (success)\n remove fd from the array;\n\n if (array is empty)\n break;\n }\n\n>\n> On second thought, new per-transaction commit/rollback callback is essential when users or the resolver process want to resolve the specifed foreign transaction, but not essential when backends commit/rollback foreign transactions. That is, even if we add per-transaction new API for users and resolver process, backends can still use CallXactCallbacks() when they commit/rollback foreign transactions. Is this understanding right?\n\nI haven’t tried that but I think that's possible if we can know\ncommit/rollback callback (e.g., postgresCommitForeignTransaction() etc\nin postgres_fdw) is called via SQL function (pg_resolve_foreign_xact()\nSQL function) or called by the resolver process. That is, we register\nforeign transaction via FdwXactRegisterXact(), do nothing in\npostgresCommit/RollbackForeignTransaction() if these are called by the\nbackend, and perform COMMIT/ROLLBACK in pgfdw_xact_callback() in\nasynchronous manner. 
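As an aside, the send-all-then-poll pattern in the pseudo-code above can be modeled outside of PostgreSQL. The following Python sketch is purely illustrative: the thread-backed "servers" and the COMMIT/OK byte protocol are invented stand-ins, not postgres_fdw code. Phase 1 sends the commit command to every server without waiting; phase 2 polls the descriptors and completes each commit as its reply arrives.

```python
import selectors
import socket
import threading
import time

def fake_server(conn: socket.socket, delay: float) -> None:
    """Stand-in for a foreign server: read the commit command, reply after a delay."""
    assert conn.recv(16) == b"COMMIT"
    time.sleep(delay)
    conn.sendall(b"OK")
    conn.close()

# Three "foreign servers", each on its own socket pair.
pairs = [socket.socketpair() for _ in range(3)]
for i, (_, srv) in enumerate(pairs):
    threading.Thread(target=fake_server, args=(srv, 0.02 * (i + 1)), daemon=True).start()

sel = selectors.DefaultSelector()

# Phase 1: send the commit command to every server without waiting for replies
# (what a commit callback invoked with an async flag would do, returning its fd).
for cli, _ in pairs:
    cli.sendall(b"COMMIT")
    sel.register(cli, selectors.EVENT_READ)

# Phase 2: poll the fds and complete each commit as its reply arrives
# (the role of the proposed CompleteForeignTransaction()).
done = 0
while done < len(pairs):
    for key, _ in sel.select():
        sock = key.fileobj
        assert sock.recv(16) == b"OK"
        sel.unregister(sock)
        sock.close()
        done += 1

print(done)  # all foreign commits completed
```

With the three simulated delays the total wait is roughly the slowest server's delay rather than the sum of all three, which is the point of issuing every commit before waiting on any of them.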
On the other hand, if\npostgresCommit/RollbackForeignTransaction() is called via SQL function\nor by the resolver these functions commit/rollback the transaction.\n\n>\n> > Regarding cursor_number, it essentially needs to be unique at least\n> > within a transaction so we can manage it per transaction or per\n> > connection. But the current postgres_fdw rather ensure uniqueness\n> > across all connections. So it seems to me that this can be fixed by\n> > making individual connection have cursor_number and resetting it in\n> > pgfdw_cleanup_after_transaction(). I think this can be in a separate\n> > patch.\n>\n> Maybe, so let's work on this later, at least after we confirm that\n> this change is really necessary.\n\nOkay.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 5 Feb 2021 14:45:40 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, Feb 5, 2021 at 2:45 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Feb 2, 2021 at 5:18 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >\n> >\n> >\n> > On 2021/01/27 14:08, Masahiko Sawada wrote:\n> > > On Wed, Jan 27, 2021 at 10:29 AM Fujii Masao\n> > > <masao.fujii@oss.nttdata.com> wrote:\n> > >>\n> > >>\n> > >> You fixed some issues. But maybe you forgot to attach the latest patches?\n> > >\n> > > Yes, I've attached the updated patches.\n> >\n> > Thanks for updating the patch! I tried to review 0001 and 0002 as the self-contained change.\n> >\n> > + * An FDW that implements both commit and rollback APIs can request to register\n> > + * the foreign transaction by FdwXactRegisterXact() to participate it to a\n> > + * group of distributed tranasction. The registered foreign transactions are\n> > + * identified by OIDs of server and user.\n> >\n> > I'm afraid that the combination of OIDs of server and user is not unique. IOW, more than one foreign transactions can have the same combination of OIDs of server and user. For example, the following two SELECT queries start the different foreign transactions but their user OID is the same. OID of user mapping should be used instead of OID of user?\n> >\n> > CREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw;\n> > CREATE USER MAPPING FOR postgres SERVER loopback OPTIONS (user 'postgres');\n> > CREATE USER MAPPING FOR public SERVER loopback OPTIONS (user 'postgres');\n> > CREATE TABLE t(i int);\n> > CREATE FOREIGN TABLE ft(i int) SERVER loopback OPTIONS (table_name 't');\n> > BEGIN;\n> > SELECT * FROM ft;\n> > DROP USER MAPPING FOR postgres SERVER loopback ;\n> > SELECT * FROM ft;\n> > COMMIT;\n>\n> Good catch. 
I've considered using user mapping OID or a pair of user\n> mapping OID and server OID as a key of foreign transactions but I\n> think it also has a problem if an FDW caches the connection by pair of\n> server OID and user OID whereas the core identifies them by user\n> mapping OID. For instance, mysql_fdw manages connections by pair of\n> server OID and user OID.\n>\n> For example, let's consider the following execution:\n>\n> BEGIN;\n> SET ROLE user_A;\n> INSERT INTO ft1 VALUES (1);\n> SET ROLE user_B;\n> INSERT INTO ft1 VALUES (1);\n> COMMIT;\n>\n> Suppose that an FDW identifies the connections by {server OID, user\n> OID} and the core GTM identifies the transactions by user mapping OID,\n> and user_A and user_B use the public user mapping to connect server_X.\n> In the FDW, there are two connections identified by {user_A, sever_X}\n> and {user_B, server_X} respectively, and therefore opens two\n> transactions on each connection, while GTM has only one FdwXact entry\n> because the two connections refer to the same user mapping OID. As a\n> result, at the end of the transaction, GTM ends only one foreign\n> transaction, leaving another one.\n>\n> Using user mapping OID seems natural to me but I'm concerned that\n> changing role in the middle of transaction is likely to happen than\n> dropping the public user mapping but not sure. We would need to find\n> more better way.\n\nAfter more thought, I'm inclined to think it's better to identify\nforeign transactions by user mapping OID. The main reason is, I think\nFDWs that manage connection caches by pair of user OID and server OID\npotentially have a problem with the scenario Fujii-san mentioned. If an\nFDW has to use another user mapping (i.e., connection information) due\nto the currently used user mapping being removed, it would have to\ndisconnect the previous connection because it has to use the same\nconnection cache. 
But at that time it doesn't know the transaction\nwill be committed or aborted.\n\nAlso, such FDW has the same problem that postgres_fdw used to have; a\nbackend establishes multiple connections with the same connection\ninformation if multiple local users use the public user mapping. Even\nfrom the perspective of foreign transaction management, it makes more\nsense that foreign transactions correspond to the connections to\nforeign servers, not to the local connection information.\n\nI can see that some FDW implementations such as mysql_fdw and\nfirebird_fdw identify connections by pair of server OID and user OID\nbut I think this is because they consulted the old postgres_fdw code. I\nsuspect that there is no use case where FDW needs to identify\nconnections in that way. If the core GTM identifies them by user\nmapping OID, we could force those FDWs to change their way but I\nthink that change would be the right improvement.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 11 Feb 2021 22:24:28 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Thu, Feb 11, 2021 at 6:25 PM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> On Fri, Feb 5, 2021 at 2:45 PM Masahiko Sawada <sawada.mshk@gmail.com>\n> wrote:\n> >\n> > On Tue, Feb 2, 2021 at 5:18 PM Fujii Masao <masao.fujii@oss.nttdata.com>\n> wrote:\n> > >\n> > >\n> > >\n> > > On 2021/01/27 14:08, Masahiko Sawada wrote:\n> > > > On Wed, Jan 27, 2021 at 10:29 AM Fujii Masao\n> > > > <masao.fujii@oss.nttdata.com> wrote:\n> > > >>\n> > > >>\n> > > >> You fixed some issues. But maybe you forgot to attach the latest\n> patches?\n> > > >\n> > > > Yes, I've attached the updated patches.\n> > >\n> > > Thanks for updating the patch! I tried to review 0001 and 0002 as the\n> self-contained change.\n> > >\n> > > + * An FDW that implements both commit and rollback APIs can request\n> to register\n> > > + * the foreign transaction by FdwXactRegisterXact() to participate it\n> to a\n> > > + * group of distributed tranasction. The registered foreign\n> transactions are\n> > > + * identified by OIDs of server and user.\n> > >\n> > > I'm afraid that the combination of OIDs of server and user is not\n> unique. IOW, more than one foreign transactions can have the same\n> combination of OIDs of server and user. For example, the following two\n> SELECT queries start the different foreign transactions but their user OID\n> is the same. OID of user mapping should be used instead of OID of user?\n> > >\n> > > CREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw;\n> > > CREATE USER MAPPING FOR postgres SERVER loopback OPTIONS (user\n> 'postgres');\n> > > CREATE USER MAPPING FOR public SERVER loopback OPTIONS (user\n> 'postgres');\n> > > CREATE TABLE t(i int);\n> > > CREATE FOREIGN TABLE ft(i int) SERVER loopback OPTIONS\n> (table_name 't');\n> > > BEGIN;\n> > > SELECT * FROM ft;\n> > > DROP USER MAPPING FOR postgres SERVER loopback ;\n> > > SELECT * FROM ft;\n> > > COMMIT;\n> >\n> > Good catch. 
I've considered using user mapping OID or a pair of user\n> > mapping OID and server OID as a key of foreign transactions but I\n> > think it also has a problem if an FDW caches the connection by pair of\n> > server OID and user OID whereas the core identifies them by user\n> > mapping OID. For instance, mysql_fdw manages connections by pair of\n> > server OID and user OID.\n> >\n> > For example, let's consider the following execution:\n> >\n> > BEGIN;\n> > SET ROLE user_A;\n> > INSERT INTO ft1 VALUES (1);\n> > SET ROLE user_B;\n> > INSERT INTO ft1 VALUES (1);\n> > COMMIT;\n> >\n> > Suppose that an FDW identifies the connections by {server OID, user\n> > OID} and the core GTM identifies the transactions by user mapping OID,\n> > and user_A and user_B use the public user mapping to connect server_X.\n> > In the FDW, there are two connections identified by {user_A, sever_X}\n> > and {user_B, server_X} respectively, and therefore opens two\n> > transactions on each connection, while GTM has only one FdwXact entry\n> > because the two connections refer to the same user mapping OID. As a\n> > result, at the end of the transaction, GTM ends only one foreign\n> > transaction, leaving another one.\n> >\n> > Using user mapping OID seems natural to me but I'm concerned that\n> > changing role in the middle of transaction is likely to happen than\n> > dropping the public user mapping but not sure. We would need to find\n> > more better way.\n>\n> After more thought, I'm inclined to think it's better to identify\n> foreign transactions by user mapping OID. The main reason is, I think\n> FDWs that manages connection caches by pair of user OID and server OID\n> potentially has a problem with the scenario Fujii-san mentioned. If an\n> FDW has to use another user mapping (i.g., connection information) due\n> to the currently used user mapping being removed, it would have to\n> disconnect the previous connection because it has to use the same\n> connection cache. 
But at that time it doesn't know the transaction\n> will be committed or aborted.\n>\n> Also, such FDW has the same problem that postgres_fdw used to have; a\n> backend establishes multiple connections with the same connection\n> information if multiple local users use the public user mapping. Even\n> from the perspective of foreign transaction management, it more makes\n> sense that foreign transactions correspond to the connections to\n> foreign servers, not to the local connection information.\n>\n> I can see that some FDW implementations such as mysql_fdw and\n> firebird_fdw identify connections by pair of server OID and user OID\n> but I think this is because they consulted to old postgres_fdw code. I\n> suspect that there is no use case where FDW needs to identify\n> connections in that way. If the core GTM identifies them by user\n> mapping OID, we could enforce those FDWs to change their way but I\n> think that change would be the right improvement.\n>\n> Regards,\n>\n> --\n> Masahiko Sawada\n> EDB: https://www.enterprisedb.com/\n>\n>\n>\nRegression is failing, can you please take a look.\n\nhttps://cirrus-ci.com/task/5522445932167168\n\n\nt/080_pg_isready.pl ....... ok\n# Failed test 'parallel reindexdb for system with --concurrently skips\ncatalogs status (got 1 vs expected 0)'\n# at t/090_reindexdb.pl line 191.\nBailout called. Further testing stopped: system pg_ctl failed\nFAILED--Further testing stopped: system pg_ctl failed\nmake[2]: *** [Makefile:57: check] Error 255\nmake[1]: *** [Makefile:43: check-scripts-recurse] Error 2\nmake: *** [GNUmakefile:71: check-world-src/bin-recurse] Error 2\n=== ./contrib/hstore_plperl/log/initdb.log ===\nRunning in no-clean mode. 
Mistakes will not be cleaned up.\nThe files belonging to this database system will be owned by user\n\"postgres\".\nThis user must also own the server process.\n--\n\n\n--\nIbrar Ahmed\n\nOn Thu, Feb 11, 2021 at 6:25 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:On Fri, Feb 5, 2021 at 2:45 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Feb 2, 2021 at 5:18 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >\n> >\n> >\n> > On 2021/01/27 14:08, Masahiko Sawada wrote:\n> > > On Wed, Jan 27, 2021 at 10:29 AM Fujii Masao\n> > > <masao.fujii@oss.nttdata.com> wrote:\n> > >>\n> > >>\n> > >> You fixed some issues. But maybe you forgot to attach the latest patches?\n> > >\n> > > Yes, I've attached the updated patches.\n> >\n> > Thanks for updating the patch! I tried to review 0001 and 0002 as the self-contained change.\n> >\n> > + * An FDW that implements both commit and rollback APIs can request to register\n> > + * the foreign transaction by FdwXactRegisterXact() to participate it to a\n> > + * group of distributed tranasction. The registered foreign transactions are\n> > + * identified by OIDs of server and user.\n> >\n> > I'm afraid that the combination of OIDs of server and user is not unique. IOW, more than one foreign transactions can have the same combination of OIDs of server and user. For example, the following two SELECT queries start the different foreign transactions but their user OID is the same. OID of user mapping should be used instead of OID of user?\n> >\n> > CREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw;\n> > CREATE USER MAPPING FOR postgres SERVER loopback OPTIONS (user 'postgres');\n> > CREATE USER MAPPING FOR public SERVER loopback OPTIONS (user 'postgres');\n> > CREATE TABLE t(i int);\n> > CREATE FOREIGN TABLE ft(i int) SERVER loopback OPTIONS (table_name 't');\n> > BEGIN;\n> > SELECT * FROM ft;\n> > DROP USER MAPPING FOR postgres SERVER loopback ;\n> > SELECT * FROM ft;\n> > COMMIT;\n>\n> Good catch. 
I've considered using user mapping OID or a pair of user\n> mapping OID and server OID as a key of foreign transactions but I\n> think it also has a problem if an FDW caches the connection by pair of\n> server OID and user OID whereas the core identifies them by user\n> mapping OID. For instance, mysql_fdw manages connections by pair of\n> server OID and user OID.\n>\n> For example, let's consider the following execution:\n>\n> BEGIN;\n> SET ROLE user_A;\n> INSERT INTO ft1 VALUES (1);\n> SET ROLE user_B;\n> INSERT INTO ft1 VALUES (1);\n> COMMIT;\n>\n> Suppose that an FDW identifies the connections by {server OID, user\n> OID} and the core GTM identifies the transactions by user mapping OID,\n> and user_A and user_B use the public user mapping to connect server_X.\n> In the FDW, there are two connections identified by {user_A, sever_X}\n> and {user_B, server_X} respectively, and therefore opens two\n> transactions on each connection, while GTM has only one FdwXact entry\n> because the two connections refer to the same user mapping OID. As a\n> result, at the end of the transaction, GTM ends only one foreign\n> transaction, leaving another one.\n>\n> Using user mapping OID seems natural to me but I'm concerned that\n> changing role in the middle of transaction is likely to happen than\n> dropping the public user mapping but not sure. We would need to find\n> more better way.\n\nAfter more thought, I'm inclined to think it's better to identify\nforeign transactions by user mapping OID. The main reason is, I think\nFDWs that manages connection caches by pair of user OID and server OID\npotentially has a problem with the scenario Fujii-san mentioned. If an\nFDW has to use another user mapping (i.g., connection information) due\nto the currently used user mapping being removed, it would have to\ndisconnect the previous connection because it has to use the same\nconnection cache. 
But at that time it doesn't know the transaction\nwill be committed or aborted.\n\nAlso, such FDW has the same problem that postgres_fdw used to have; a\nbackend establishes multiple connections with the same connection\ninformation if multiple local users use the public user mapping. Even\nfrom the perspective of foreign transaction management, it more makes\nsense that foreign transactions correspond to the connections to\nforeign servers, not to the local connection information.\n\nI can see that some FDW implementations such as mysql_fdw and\nfirebird_fdw identify connections by pair of server OID and user OID\nbut I think this is because they consulted to old postgres_fdw code. I\nsuspect that there is no use case where FDW needs to identify\nconnections in that way. If the core GTM identifies them by user\nmapping OID, we could enforce those FDWs to change their way but I\nthink that change would be the right improvement.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\nRegression is failing, can you please take a look.\nhttps://cirrus-ci.com/task/5522445932167168\n\nt/080_pg_isready.pl ....... ok\n# Failed test 'parallel reindexdb for system with --concurrently skips catalogs status (got 1 vs expected 0)'\n# at t/090_reindexdb.pl line 191.\nBailout called. Further testing stopped: system pg_ctl failed\nFAILED--\nFurther testing stopped: system pg_ctl failed\nmake[2]: *** [Makefile:57: check] Error 255\nmake[1]: *** [Makefile:43: check-scripts-recurse] Error 2\nmake: *** [GNUmakefile:71: check-world-src/bin-recurse] Error 2\n\n-- \nIbrar Ahmed",
"msg_date": "Sun, 14 Mar 2021 23:55:06 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Mon, Mar 15, 2021 at 3:55 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>\n>\n>\n> On Thu, Feb 11, 2021 at 6:25 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>> On Fri, Feb 5, 2021 at 2:45 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>> >\n>> > On Tue, Feb 2, 2021 at 5:18 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> > >\n>> > >\n>> > >\n>> > > On 2021/01/27 14:08, Masahiko Sawada wrote:\n>> > > > On Wed, Jan 27, 2021 at 10:29 AM Fujii Masao\n>> > > > <masao.fujii@oss.nttdata.com> wrote:\n>> > > >>\n>> > > >>\n>> > > >> You fixed some issues. But maybe you forgot to attach the latest patches?\n>> > > >\n>> > > > Yes, I've attached the updated patches.\n>> > >\n>> > > Thanks for updating the patch! I tried to review 0001 and 0002 as the self-contained change.\n>> > >\n>> > > + * An FDW that implements both commit and rollback APIs can request to register\n>> > > + * the foreign transaction by FdwXactRegisterXact() to participate it to a\n>> > > + * group of distributed tranasction. The registered foreign transactions are\n>> > > + * identified by OIDs of server and user.\n>> > >\n>> > > I'm afraid that the combination of OIDs of server and user is not unique. IOW, more than one foreign transactions can have the same combination of OIDs of server and user. For example, the following two SELECT queries start the different foreign transactions but their user OID is the same. 
OID of user mapping should be used instead of OID of user?\n>> > >\n>> > > CREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw;\n>> > > CREATE USER MAPPING FOR postgres SERVER loopback OPTIONS (user 'postgres');\n>> > > CREATE USER MAPPING FOR public SERVER loopback OPTIONS (user 'postgres');\n>> > > CREATE TABLE t(i int);\n>> > > CREATE FOREIGN TABLE ft(i int) SERVER loopback OPTIONS (table_name 't');\n>> > > BEGIN;\n>> > > SELECT * FROM ft;\n>> > > DROP USER MAPPING FOR postgres SERVER loopback ;\n>> > > SELECT * FROM ft;\n>> > > COMMIT;\n>> >\n>> > Good catch. I've considered using user mapping OID or a pair of user\n>> > mapping OID and server OID as a key of foreign transactions but I\n>> > think it also has a problem if an FDW caches the connection by pair of\n>> > server OID and user OID whereas the core identifies them by user\n>> > mapping OID. For instance, mysql_fdw manages connections by pair of\n>> > server OID and user OID.\n>> >\n>> > For example, let's consider the following execution:\n>> >\n>> > BEGIN;\n>> > SET ROLE user_A;\n>> > INSERT INTO ft1 VALUES (1);\n>> > SET ROLE user_B;\n>> > INSERT INTO ft1 VALUES (1);\n>> > COMMIT;\n>> >\n>> > Suppose that an FDW identifies the connections by {server OID, user\n>> > OID} and the core GTM identifies the transactions by user mapping OID,\n>> > and user_A and user_B use the public user mapping to connect server_X.\n>> > In the FDW, there are two connections identified by {user_A, sever_X}\n>> > and {user_B, server_X} respectively, and therefore opens two\n>> > transactions on each connection, while GTM has only one FdwXact entry\n>> > because the two connections refer to the same user mapping OID. 
As a\n>> > result, at the end of the transaction, GTM ends only one foreign\n>> > transaction, leaving another one.\n>> >\n>> > Using user mapping OID seems natural to me but I'm concerned that\n>> > changing role in the middle of transaction is likely to happen than\n>> > dropping the public user mapping but not sure. We would need to find\n>> > more better way.\n>>\n>> After more thought, I'm inclined to think it's better to identify\n>> foreign transactions by user mapping OID. The main reason is, I think\n>> FDWs that manages connection caches by pair of user OID and server OID\n>> potentially has a problem with the scenario Fujii-san mentioned. If an\n>> FDW has to use another user mapping (i.g., connection information) due\n>> to the currently used user mapping being removed, it would have to\n>> disconnect the previous connection because it has to use the same\n>> connection cache. But at that time it doesn't know the transaction\n>> will be committed or aborted.\n>>\n>> Also, such FDW has the same problem that postgres_fdw used to have; a\n>> backend establishes multiple connections with the same connection\n>> information if multiple local users use the public user mapping. Even\n>> from the perspective of foreign transaction management, it more makes\n>> sense that foreign transactions correspond to the connections to\n>> foreign servers, not to the local connection information.\n>>\n>> I can see that some FDW implementations such as mysql_fdw and\n>> firebird_fdw identify connections by pair of server OID and user OID\n>> but I think this is because they consulted to old postgres_fdw code. I\n>> suspect that there is no use case where FDW needs to identify\n>> connections in that way. 
If the core GTM identifies them by user\n>> mapping OID, we could enforce those FDWs to change their way but I\n>> think that change would be the right improvement.\n>>\n>> Regards,\n>>\n>> --\n>> Masahiko Sawada\n>> EDB: https://www.enterprisedb.com/\n>>\n>>\n>\n> Regression is failing, can you please take a look.\n\nThank you!\n\nI've attached the updated version patch set.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Wed, 17 Mar 2021 12:03:28 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "Hi,\nFor v35-0007-Prepare-foreign-transactions-at-commit-time.patch :\n\nWith this commit, the foreign server modified within the transaction marked\nas 'modified'.\n\ntransaction marked -> transaction is marked\n\n+#define IsForeignTwophaseCommitRequested() \\\n+ (foreign_twophase_commit > FOREIGN_TWOPHASE_COMMIT_DISABLED)\n\nSince the other enum is FOREIGN_TWOPHASE_COMMIT_REQUIRED, I think the macro\nshould be named: IsForeignTwophaseCommitRequired.\n\n+static bool\n+checkForeignTwophaseCommitRequired(bool local_modified)\n\n+ if (!ServerSupportTwophaseCommit(fdw_part))\n+ have_no_twophase = true;\n...\n+ if (have_no_twophase)\n+ ereport(ERROR,\n\nIt seems the error case should be reported within the loop. This way, we\ndon't need to iterate the other participant(s).\nAccordingly, nserverswritten should be incremented for local server prior\nto the loop. The condition in the loop would become if\n(!ServerSupportTwophaseCommit(fdw_part) && nserverswritten > 1).\nhave_no_twophase is no longer needed.\n\nCheers\n\nOn Tue, Mar 16, 2021 at 8:04 PM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> On Mon, Mar 15, 2021 at 3:55 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> >\n> >\n> >\n> > On Thu, Feb 11, 2021 at 6:25 PM Masahiko Sawada <sawada.mshk@gmail.com>\n> wrote:\n> >>\n> >> On Fri, Feb 5, 2021 at 2:45 PM Masahiko Sawada <sawada.mshk@gmail.com>\n> wrote:\n> >> >\n> >> > On Tue, Feb 2, 2021 at 5:18 PM Fujii Masao <\n> masao.fujii@oss.nttdata.com> wrote:\n> >> > >\n> >> > >\n> >> > >\n> >> > > On 2021/01/27 14:08, Masahiko Sawada wrote:\n> >> > > > On Wed, Jan 27, 2021 at 10:29 AM Fujii Masao\n> >> > > > <masao.fujii@oss.nttdata.com> wrote:\n> >> > > >>\n> >> > > >>\n> >> > > >> You fixed some issues. But maybe you forgot to attach the latest\n> patches?\n> >> > > >\n> >> > > > Yes, I've attached the updated patches.\n> >> > >\n> >> > > Thanks for updating the patch! 
I tried to review 0001 and 0002 as\n> the self-contained change.\n> >> > >\n> >> > > + * An FDW that implements both commit and rollback APIs can\n> request to register\n> >> > > + * the foreign transaction by FdwXactRegisterXact() to participate\n> it to a\n> >> > > + * group of distributed tranasction. The registered foreign\n> transactions are\n> >> > > + * identified by OIDs of server and user.\n> >> > >\n> >> > > I'm afraid that the combination of OIDs of server and user is not\n> unique. IOW, more than one foreign transactions can have the same\n> combination of OIDs of server and user. For example, the following two\n> SELECT queries start the different foreign transactions but their user OID\n> is the same. OID of user mapping should be used instead of OID of user?\n> >> > >\n> >> > > CREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw;\n> >> > > CREATE USER MAPPING FOR postgres SERVER loopback OPTIONS (user\n> 'postgres');\n> >> > > CREATE USER MAPPING FOR public SERVER loopback OPTIONS (user\n> 'postgres');\n> >> > > CREATE TABLE t(i int);\n> >> > > CREATE FOREIGN TABLE ft(i int) SERVER loopback OPTIONS\n> (table_name 't');\n> >> > > BEGIN;\n> >> > > SELECT * FROM ft;\n> >> > > DROP USER MAPPING FOR postgres SERVER loopback ;\n> >> > > SELECT * FROM ft;\n> >> > > COMMIT;\n> >> >\n> >> > Good catch. I've considered using user mapping OID or a pair of user\n> >> > mapping OID and server OID as a key of foreign transactions but I\n> >> > think it also has a problem if an FDW caches the connection by pair of\n> >> > server OID and user OID whereas the core identifies them by user\n> >> > mapping OID. 
For instance, mysql_fdw manages connections by pair of\n> >> > server OID and user OID.\n> >> >\n> >> > For example, let's consider the following execution:\n> >> >\n> >> > BEGIN;\n> >> > SET ROLE user_A;\n> >> > INSERT INTO ft1 VALUES (1);\n> >> > SET ROLE user_B;\n> >> > INSERT INTO ft1 VALUES (1);\n> >> > COMMIT;\n> >> >\n> >> > Suppose that an FDW identifies the connections by {server OID, user\n> >> > OID} and the core GTM identifies the transactions by user mapping OID,\n> >> > and user_A and user_B use the public user mapping to connect server_X.\n> >> > In the FDW, there are two connections identified by {user_A, sever_X}\n> >> > and {user_B, server_X} respectively, and therefore opens two\n> >> > transactions on each connection, while GTM has only one FdwXact entry\n> >> > because the two connections refer to the same user mapping OID. As a\n> >> > result, at the end of the transaction, GTM ends only one foreign\n> >> > transaction, leaving another one.\n> >> >\n> >> > Using user mapping OID seems natural to me but I'm concerned that\n> >> > changing role in the middle of transaction is likely to happen than\n> >> > dropping the public user mapping but not sure. We would need to find\n> >> > more better way.\n> >>\n> >> After more thought, I'm inclined to think it's better to identify\n> >> foreign transactions by user mapping OID. The main reason is, I think\n> >> FDWs that manages connection caches by pair of user OID and server OID\n> >> potentially has a problem with the scenario Fujii-san mentioned. If an\n> >> FDW has to use another user mapping (i.g., connection information) due\n> >> to the currently used user mapping being removed, it would have to\n> >> disconnect the previous connection because it has to use the same\n> >> connection cache. 
But at that time it doesn't know the transaction\n> >> will be committed or aborted.\n> >>\n> >> Also, such FDW has the same problem that postgres_fdw used to have; a\n> >> backend establishes multiple connections with the same connection\n> >> information if multiple local users use the public user mapping. Even\n> >> from the perspective of foreign transaction management, it more makes\n> >> sense that foreign transactions correspond to the connections to\n> >> foreign servers, not to the local connection information.\n> >>\n> >> I can see that some FDW implementations such as mysql_fdw and\n> >> firebird_fdw identify connections by pair of server OID and user OID\n> >> but I think this is because they consulted to old postgres_fdw code. I\n> >> suspect that there is no use case where FDW needs to identify\n> >> connections in that way. If the core GTM identifies them by user\n> >> mapping OID, we could enforce those FDWs to change their way but I\n> >> think that change would be the right improvement.\n> >>\n> >> Regards,\n> >>\n> >> --\n> >> Masahiko Sawada\n> >> EDB: https://www.enterprisedb.com/\n> >>\n> >>\n> >\n> > Regression is failing, can you please take a look.\n>\n> Thank you!\n>\n> I've attached the updated version patch set.\n>\n> Regards,\n>\n> --\n> Masahiko Sawada\n> EDB: https://www.enterprisedb.com/\n>\n",
"msg_date": "Wed, 17 Mar 2021 02:06:19 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "\n\nOn 2021/03/17 12:03, Masahiko Sawada wrote:\n> I've attached the updated version patch set.\n\nThanks for updating the patches! I'm now restarting my review of 2PC because\nI'd like to use this feature in PG15.\n\n\nI think the following logic for resolving and removing the fdwxact entries\nby the transaction resolver needs to be fixed.\n\n1. check if pending fdwxact entries exist\n\nHoldInDoubtFdwXacts() checks if there are entries whose condition is\nInvalidBackendId and so on. After that it gets the indexes into the fdwxacts\narray. The fdwXactLock is released at the end of this phase.\n\n2. resolve and remove the entries held in the 1st phase.\n\nResolveFdwXacts() resolves the status of each fdwxact entry using the\nindexes. At the end of resolving, the transaction resolver removes the entry\nfrom the fdwxacts array via remove_fdwact().\n\nThe entry is removed as follows. Since the last entry is moved into the freed\nslot, the indexes obtained in the 1st phase are meaningless afterwards.\n\n/* Remove the entry from active array */\nFdwXactCtl->num_fdwxacts--;\nFdwXactCtl->fdwxacts[i] = FdwXactCtl->fdwxacts[FdwXactCtl->num_fdwxacts];\n\nThis seems to lead to resolving unexpected fdwxacts and it can cause the\nfollowing assertion failure. That's how I noticed. For example, there is the\ncase where a backend inserts a new fdwxact entry into the free slot from which\nthe resolver removed an entry right before, and the resolver accesses the new\nentry, which doesn't need to be resolved yet, because it uses the indexes\nchecked in the 1st phase.\n\nAssert(fdwxact->locking_backend == MyBackendId);\n\n\n\nThe simple solution is to hold fdwXactLock exclusively all the time from the\nbeginning of the 1st phase to the end of the 2nd phase. But I worry that the\nperformance impact would become too big...\n\nI came up with two solutions although there may be better ones.\n\nA. to remove resolved entries at once after resolution for all held entries is\nfinished\n\nIf so, we don't need to take the exclusive lock for a long time. But this\nhas other problems: pg_remove_foreign_xact() can still remove entries,\nand we need to handle failures of resolving.\n\nI wondered whether we can solve the first problem by introducing a new lock\nlike a \"removing lock\" so that only the processes which hold the lock can\nremove the entries. The performance impact is limited since the insertion of\nfdwxact entries is not blocked by this lock. And the second problem can be\nsolved using a try-catch block.\n\n\nB. to merge the 1st and 2nd phases\n\nNow, the resolver resolves the entries together. That's the reason why it's\ndifficult to remove the entries. So, executing checking, resolving and\nremoving per entry seems to solve the problem. I think it's better since\nthis is simpler than A. If I'm missing something, please let me know.\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 27 Apr 2021 10:03:20 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Tue, Apr 27, 2021 at 10:03 AM Masahiro Ikeda\n<ikedamsh@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/03/17 12:03, Masahiko Sawada wrote:\n> > I've attached the updated version patch set.\n>\n> Thanks for updating the patches! I'm now restarting to review of 2PC because\n> I'd like to use this feature in PG15.\n\nThank you for reviewing the patch! Much appreciated.\n\n>\n>\n> I think the following logic of resolving and removing the fdwxact entries\n> by the transaction resolver needs to be fixed.\n>\n> 1. check if pending fdwxact entries exist\n>\n> HoldInDoubtFdwXacts() checks if there are entries which the condition is\n> InvalidBackendId and so on. After that it gets the indexes of the fdwxacts\n> array. The fdwXactLock is released at the end of this phase.\n>\n> 2. resolve and remove the entries held in 1st phase.\n>\n> ResolveFdwXacts() resolves the status per each fdwxact entry using the\n> indexes. The end of resolving, the transaction resolver remove the entry in\n> fdwxacts array via remove_fdwact().\n>\n> The way to remove the entry is the following. Since to control using the\n> index, the indexes of getting in the 1st phase are meaningless anymore.\n>\n> /* Remove the entry from active array */\n> FdwXactCtl->num_fdwxacts--;\n> FdwXactCtl->fdwxacts[i] = FdwXactCtl->fdwxacts[FdwXactCtl->num_fdwxacts];\n>\n> This seems to lead resolving the unexpected fdwxacts and it can occur the\n> following assertion error. That's why I noticed. For example, there is the\n> case which a backend inserts new fdwxact entry in the free space, which the\n> resolver removed the entry right before, and the resolver accesses the new\n> entry which doesn't need to resolve yet because it use the indexes checked in\n> 1st phase.\n>\n> Assert(fdwxact->locking_backend == MyBackendId);\n\nGood point. 
I agree with your analysis.\n\n>\n>\n>\n> The simple solution is that to get fdwXactLock exclusive all the time from the\n> begining of 1st phase to the finishing of 2nd phase. But, I worried that the\n> performance impact became too big...\n>\n> I came up with two solutions although there may be better solutions.\n>\n> A. to remove resolved entries at once after resolution for all held entries is\n> finished\n>\n> If so, we don't need to take the exclusive lock for a long time. But, this\n> have other problems, which pg_remove_foreign_xact() can still remove entries\n> and we need to handle the fail of resolving.\n>\n> I wondered that we can solve the first problem to introduce a new lock like\n> \"removing lock\" and only the processes which hold the lock can remove the\n> entries. The performance impact is limited since the insertion the fdwxact\n> entries is not blocked by this lock. And second problem can be solved using\n> try-catch sentence.\n>\n>\n> B. to merge 1st and 2nd phase\n>\n> Now, the resolver resolves the entries together. That's the reason why it's\n> difficult to remove the entries. So, it seems to solve the problem to execute\n> checking, resolving and removing per each entry. I think it's better since\n> this is simpler than A. If I'm missing something, please let me know.\n\nIt seems to me that solution B would be simpler and better. I'll try\nto fix this issue by using solution B and rebase the patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Sat, 1 May 2021 09:22:59 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Wed, Mar 17, 2021 at 6:03 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> Hi,\n> For v35-0007-Prepare-foreign-transactions-at-commit-time.patch :\n\nThank you for reviewing the patch!\n\n>\n> With this commit, the foreign server modified within the transaction marked as 'modified'.\n>\n> transaction marked -> transaction is marked\n\nWill fix.\n\n>\n> +#define IsForeignTwophaseCommitRequested() \\\n> + (foreign_twophase_commit > FOREIGN_TWOPHASE_COMMIT_DISABLED)\n>\n> Since the other enum is FOREIGN_TWOPHASE_COMMIT_REQUIRED, I think the macro should be named: IsForeignTwophaseCommitRequired.\n\nBut even if foreign_twophase_commit is\nFOREIGN_TWOPHASE_COMMIT_REQUIRED, the two-phase commit is not used if\nthere is only one modified server, right? It seems the name\nIsForeignTwophaseCommitRequested is fine.\n\n>\n> +static bool\n> +checkForeignTwophaseCommitRequired(bool local_modified)\n>\n> + if (!ServerSupportTwophaseCommit(fdw_part))\n> + have_no_twophase = true;\n> ...\n> + if (have_no_twophase)\n> + ereport(ERROR,\n>\n> It seems the error case should be reported within the loop. This way, we don't need to iterate the other participant(s).\n> Accordingly, nserverswritten should be incremented for local server prior to the loop. The condition in the loop would become if (!ServerSupportTwophaseCommit(fdw_part) && nserverswritten > 1).\n> have_no_twophase is no longer needed.\n\nHmm, I think If we process one 2pc-non-capable server first and then\nprocess another one 2pc-capable server, we should raise an error but\ncannot detect that.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Sat, 1 May 2021 13:09:13 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, Apr 30, 2021 at 9:09 PM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> On Wed, Mar 17, 2021 at 6:03 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> >\n> > Hi,\n> > For v35-0007-Prepare-foreign-transactions-at-commit-time.patch :\n>\n> Thank you for reviewing the patch!\n>\n> >\n> > With this commit, the foreign server modified within the transaction\n> marked as 'modified'.\n> >\n> > transaction marked -> transaction is marked\n>\n> Will fix.\n>\n> >\n> > +#define IsForeignTwophaseCommitRequested() \\\n> > + (foreign_twophase_commit > FOREIGN_TWOPHASE_COMMIT_DISABLED)\n> >\n> > Since the other enum is FOREIGN_TWOPHASE_COMMIT_REQUIRED, I think the\n> macro should be named: IsForeignTwophaseCommitRequired.\n>\n> But even if foreign_twophase_commit is\n> FOREIGN_TWOPHASE_COMMIT_REQUIRED, the two-phase commit is not used if\n> there is only one modified server, right? It seems the name\n> IsForeignTwophaseCommitRequested is fine.\n>\n> >\n> > +static bool\n> > +checkForeignTwophaseCommitRequired(bool local_modified)\n> >\n> > + if (!ServerSupportTwophaseCommit(fdw_part))\n> > + have_no_twophase = true;\n> > ...\n> > + if (have_no_twophase)\n> > + ereport(ERROR,\n> >\n> > It seems the error case should be reported within the loop. This way, we\n> don't need to iterate the other participant(s).\n> > Accordingly, nserverswritten should be incremented for local server\n> prior to the loop. 
The condition in the loop would become if\n> (!ServerSupportTwophaseCommit(fdw_part) && nserverswritten > 1).\n> > have_no_twophase is no longer needed.\n>\n> Hmm, I think If we process one 2pc-non-capable server first and then\n> process another one 2pc-capable server, we should raise an error but\n> cannot detect that.\n>\n\nThen the check would stay as what you have in the patch:\n\n if (!ServerSupportTwophaseCommit(fdw_part))\n\nWhen the non-2pc-capable server is encountered, we would report the error\nin place (following the ServerSupportTwophaseCommit check) and come out of\nthe loop.\nhave_no_twophase can be dropped.\n\nThanks\n\n\n>\n> Regards,\n>\n> --\n> Masahiko Sawada\n> EDB: https://www.enterprisedb.com/\n>\n",
"msg_date": "Sat, 1 May 2021 09:27:16 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Sun, May 2, 2021 at 1:23 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n>\n>\n> On Fri, Apr 30, 2021 at 9:09 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>> On Wed, Mar 17, 2021 at 6:03 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>> >\n>> > Hi,\n>> > For v35-0007-Prepare-foreign-transactions-at-commit-time.patch :\n>>\n>> Thank you for reviewing the patch!\n>>\n>> >\n>> > With this commit, the foreign server modified within the transaction marked as 'modified'.\n>> >\n>> > transaction marked -> transaction is marked\n>>\n>> Will fix.\n>>\n>> >\n>> > +#define IsForeignTwophaseCommitRequested() \\\n>> > + (foreign_twophase_commit > FOREIGN_TWOPHASE_COMMIT_DISABLED)\n>> >\n>> > Since the other enum is FOREIGN_TWOPHASE_COMMIT_REQUIRED, I think the macro should be named: IsForeignTwophaseCommitRequired.\n>>\n>> But even if foreign_twophase_commit is\n>> FOREIGN_TWOPHASE_COMMIT_REQUIRED, the two-phase commit is not used if\n>> there is only one modified server, right? It seems the name\n>> IsForeignTwophaseCommitRequested is fine.\n>>\n>> >\n>> > +static bool\n>> > +checkForeignTwophaseCommitRequired(bool local_modified)\n>> >\n>> > + if (!ServerSupportTwophaseCommit(fdw_part))\n>> > + have_no_twophase = true;\n>> > ...\n>> > + if (have_no_twophase)\n>> > + ereport(ERROR,\n>> >\n>> > It seems the error case should be reported within the loop. This way, we don't need to iterate the other participant(s).\n>> > Accordingly, nserverswritten should be incremented for local server prior to the loop. 
The condition in the loop would become if (!ServerSupportTwophaseCommit(fdw_part) && nserverswritten > 1).\n>> > have_no_twophase is no longer needed.\n>>\n>> Hmm, I think If we process one 2pc-non-capable server first and then\n>> process another one 2pc-capable server, we should raise an error but\n>> cannot detect that.\n>\n>\n> Then the check would stay as what you have in the patch:\n>\n> if (!ServerSupportTwophaseCommit(fdw_part))\n>\n> When the non-2pc-capable server is encountered, we would report the error in place (following the ServerSupportTwophaseCommit check) and come out of the loop.\n> have_no_twophase can be dropped.\n\nBut if we processed only one non-2pc-capable server, we would raise an\nerror but should not in that case.\n\nOn second thought, I think we can track how many servers are modified\nor not capable of 2PC during registration and unregistration. Then we\ncan determine both whether 2PC is required and whether a non-2pc-capable server\nis involved without looking through all participants. Thoughts?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 3 May 2021 21:24:35 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Mon, May 3, 2021 at 5:25 AM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> On Sun, May 2, 2021 at 1:23 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> >\n> >\n> >\n> > On Fri, Apr 30, 2021 at 9:09 PM Masahiko Sawada <sawada.mshk@gmail.com>\n> wrote:\n> >>\n> >> On Wed, Mar 17, 2021 at 6:03 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> >> >\n> >> > Hi,\n> >> > For v35-0007-Prepare-foreign-transactions-at-commit-time.patch :\n> >>\n> >> Thank you for reviewing the patch!\n> >>\n> >> >\n> >> > With this commit, the foreign server modified within the transaction\n> marked as 'modified'.\n> >> >\n> >> > transaction marked -> transaction is marked\n> >>\n> >> Will fix.\n> >>\n> >> >\n> >> > +#define IsForeignTwophaseCommitRequested() \\\n> >> > + (foreign_twophase_commit > FOREIGN_TWOPHASE_COMMIT_DISABLED)\n> >> >\n> >> > Since the other enum is FOREIGN_TWOPHASE_COMMIT_REQUIRED, I think the\n> macro should be named: IsForeignTwophaseCommitRequired.\n> >>\n> >> But even if foreign_twophase_commit is\n> >> FOREIGN_TWOPHASE_COMMIT_REQUIRED, the two-phase commit is not used if\n> >> there is only one modified server, right? It seems the name\n> >> IsForeignTwophaseCommitRequested is fine.\n> >>\n> >> >\n> >> > +static bool\n> >> > +checkForeignTwophaseCommitRequired(bool local_modified)\n> >> >\n> >> > + if (!ServerSupportTwophaseCommit(fdw_part))\n> >> > + have_no_twophase = true;\n> >> > ...\n> >> > + if (have_no_twophase)\n> >> > + ereport(ERROR,\n> >> >\n> >> > It seems the error case should be reported within the loop. This way,\n> we don't need to iterate the other participant(s).\n> >> > Accordingly, nserverswritten should be incremented for local server\n> prior to the loop. 
The condition in the loop would become if\n> (!ServerSupportTwophaseCommit(fdw_part) && nserverswritten > 1).\n> >> > have_no_twophase is no longer needed.\n> >>\n> >> Hmm, I think If we process one 2pc-non-capable server first and then\n> >> process another one 2pc-capable server, we should raise an error but\n> >> cannot detect that.\n> >\n> >\n> > Then the check would stay as what you have in the patch:\n> >\n> > if (!ServerSupportTwophaseCommit(fdw_part))\n> >\n> > When the non-2pc-capable server is encountered, we would report the\n> error in place (following the ServerSupportTwophaseCommit check) and come\n> out of the loop.\n> > have_no_twophase can be dropped.\n>\n> But if we processed only one non-2pc-capable server, we would raise an\n> error but should not in that case.\n>\n> On second thought, I think we can track how many servers are modified\n> or not capable of 2PC during registration and unr-egistration. Then we\n> can consider both 2PC is required and there is non-2pc-capable server\n> is involved without looking through all participants. 
Thoughts?\n>\n\nThat is something worth trying.\n\nThanks\n\n\n>\n> Regards,\n>\n> --\n> Masahiko Sawada\n> EDB: https://www.enterprisedb.com/\n>\n\nOn Mon, May 3, 2021 at 5:25 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:On Sun, May 2, 2021 at 1:23 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n>\n>\n> On Fri, Apr 30, 2021 at 9:09 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>> On Wed, Mar 17, 2021 at 6:03 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>> >\n>> > Hi,\n>> > For v35-0007-Prepare-foreign-transactions-at-commit-time.patch :\n>>\n>> Thank you for reviewing the patch!\n>>\n>> >\n>> > With this commit, the foreign server modified within the transaction marked as 'modified'.\n>> >\n>> > transaction marked -> transaction is marked\n>>\n>> Will fix.\n>>\n>> >\n>> > +#define IsForeignTwophaseCommitRequested() \\\n>> > + (foreign_twophase_commit > FOREIGN_TWOPHASE_COMMIT_DISABLED)\n>> >\n>> > Since the other enum is FOREIGN_TWOPHASE_COMMIT_REQUIRED, I think the macro should be named: IsForeignTwophaseCommitRequired.\n>>\n>> But even if foreign_twophase_commit is\n>> FOREIGN_TWOPHASE_COMMIT_REQUIRED, the two-phase commit is not used if\n>> there is only one modified server, right? It seems the name\n>> IsForeignTwophaseCommitRequested is fine.\n>>\n>> >\n>> > +static bool\n>> > +checkForeignTwophaseCommitRequired(bool local_modified)\n>> >\n>> > + if (!ServerSupportTwophaseCommit(fdw_part))\n>> > + have_no_twophase = true;\n>> > ...\n>> > + if (have_no_twophase)\n>> > + ereport(ERROR,\n>> >\n>> > It seems the error case should be reported within the loop. This way, we don't need to iterate the other participant(s).\n>> > Accordingly, nserverswritten should be incremented for local server prior to the loop. 
The condition in the loop would become if (!ServerSupportTwophaseCommit(fdw_part) && nserverswritten > 1).\n>> > have_no_twophase is no longer needed.\n>>\n>> Hmm, I think If we process one 2pc-non-capable server first and then\n>> process another one 2pc-capable server, we should raise an error but\n>> cannot detect that.\n>\n>\n> Then the check would stay as what you have in the patch:\n>\n> if (!ServerSupportTwophaseCommit(fdw_part))\n>\n> When the non-2pc-capable server is encountered, we would report the error in place (following the ServerSupportTwophaseCommit check) and come out of the loop.\n> have_no_twophase can be dropped.\n\nBut if we processed only one non-2pc-capable server, we would raise an\nerror but should not in that case.\n\nOn second thought, I think we can track how many servers are modified\nor not capable of 2PC during registration and unr-egistration. Then we\ncan consider both 2PC is required and there is non-2pc-capable server\nis involved without looking through all participants. Thoughts?That is something worth trying.Thanks \n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Mon, 3 May 2021 07:15:21 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Mon, May 3, 2021 at 11:11 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n>\n>\n> On Mon, May 3, 2021 at 5:25 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>> On Sun, May 2, 2021 at 1:23 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>> >\n>> >\n>> >\n>> > On Fri, Apr 30, 2021 at 9:09 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>> >>\n>> >> On Wed, Mar 17, 2021 at 6:03 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>> >> >\n>> >> > Hi,\n>> >> > For v35-0007-Prepare-foreign-transactions-at-commit-time.patch :\n>> >>\n>> >> Thank you for reviewing the patch!\n>> >>\n>> >> >\n>> >> > With this commit, the foreign server modified within the transaction marked as 'modified'.\n>> >> >\n>> >> > transaction marked -> transaction is marked\n>> >>\n>> >> Will fix.\n>> >>\n>> >> >\n>> >> > +#define IsForeignTwophaseCommitRequested() \\\n>> >> > + (foreign_twophase_commit > FOREIGN_TWOPHASE_COMMIT_DISABLED)\n>> >> >\n>> >> > Since the other enum is FOREIGN_TWOPHASE_COMMIT_REQUIRED, I think the macro should be named: IsForeignTwophaseCommitRequired.\n>> >>\n>> >> But even if foreign_twophase_commit is\n>> >> FOREIGN_TWOPHASE_COMMIT_REQUIRED, the two-phase commit is not used if\n>> >> there is only one modified server, right? It seems the name\n>> >> IsForeignTwophaseCommitRequested is fine.\n>> >>\n>> >> >\n>> >> > +static bool\n>> >> > +checkForeignTwophaseCommitRequired(bool local_modified)\n>> >> >\n>> >> > + if (!ServerSupportTwophaseCommit(fdw_part))\n>> >> > + have_no_twophase = true;\n>> >> > ...\n>> >> > + if (have_no_twophase)\n>> >> > + ereport(ERROR,\n>> >> >\n>> >> > It seems the error case should be reported within the loop. This way, we don't need to iterate the other participant(s).\n>> >> > Accordingly, nserverswritten should be incremented for local server prior to the loop. 
The condition in the loop would become if (!ServerSupportTwophaseCommit(fdw_part) && nserverswritten > 1).\n>> >> > have_no_twophase is no longer needed.\n>> >>\n>> >> Hmm, I think If we process one 2pc-non-capable server first and then\n>> >> process another one 2pc-capable server, we should raise an error but\n>> >> cannot detect that.\n>> >\n>> >\n>> > Then the check would stay as what you have in the patch:\n>> >\n>> > if (!ServerSupportTwophaseCommit(fdw_part))\n>> >\n>> > When the non-2pc-capable server is encountered, we would report the error in place (following the ServerSupportTwophaseCommit check) and come out of the loop.\n>> > have_no_twophase can be dropped.\n>>\n>> But if we processed only one non-2pc-capable server, we would raise an\n>> error but should not in that case.\n>>\n>> On second thought, I think we can track how many servers are modified\n>> or not capable of 2PC during registration and unr-egistration. Then we\n>> can consider both 2PC is required and there is non-2pc-capable server\n>> is involved without looking through all participants. Thoughts?\n>\n>\n> That is something worth trying.\n>\n\nI've attached the updated patches that incorporated comments from\nZhihong and Ikeda-san.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Tue, 11 May 2021 13:37:24 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Mon, May 10, 2021 at 9:38 PM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> On Mon, May 3, 2021 at 11:11 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> >\n> >\n> >\n> > On Mon, May 3, 2021 at 5:25 AM Masahiko Sawada <sawada.mshk@gmail.com>\n> wrote:\n> >>\n> >> On Sun, May 2, 2021 at 1:23 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> >> >\n> >> >\n> >> >\n> >> > On Fri, Apr 30, 2021 at 9:09 PM Masahiko Sawada <\n> sawada.mshk@gmail.com> wrote:\n> >> >>\n> >> >> On Wed, Mar 17, 2021 at 6:03 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> >> >> >\n> >> >> > Hi,\n> >> >> > For v35-0007-Prepare-foreign-transactions-at-commit-time.patch :\n> >> >>\n> >> >> Thank you for reviewing the patch!\n> >> >>\n> >> >> >\n> >> >> > With this commit, the foreign server modified within the\n> transaction marked as 'modified'.\n> >> >> >\n> >> >> > transaction marked -> transaction is marked\n> >> >>\n> >> >> Will fix.\n> >> >>\n> >> >> >\n> >> >> > +#define IsForeignTwophaseCommitRequested() \\\n> >> >> > + (foreign_twophase_commit > FOREIGN_TWOPHASE_COMMIT_DISABLED)\n> >> >> >\n> >> >> > Since the other enum is FOREIGN_TWOPHASE_COMMIT_REQUIRED, I think\n> the macro should be named: IsForeignTwophaseCommitRequired.\n> >> >>\n> >> >> But even if foreign_twophase_commit is\n> >> >> FOREIGN_TWOPHASE_COMMIT_REQUIRED, the two-phase commit is not used if\n> >> >> there is only one modified server, right? It seems the name\n> >> >> IsForeignTwophaseCommitRequested is fine.\n> >> >>\n> >> >> >\n> >> >> > +static bool\n> >> >> > +checkForeignTwophaseCommitRequired(bool local_modified)\n> >> >> >\n> >> >> > + if (!ServerSupportTwophaseCommit(fdw_part))\n> >> >> > + have_no_twophase = true;\n> >> >> > ...\n> >> >> > + if (have_no_twophase)\n> >> >> > + ereport(ERROR,\n> >> >> >\n> >> >> > It seems the error case should be reported within the loop. 
This\n> way, we don't need to iterate the other participant(s).\n> >> >> > Accordingly, nserverswritten should be incremented for local\n> server prior to the loop. The condition in the loop would become if\n> (!ServerSupportTwophaseCommit(fdw_part) && nserverswritten > 1).\n> >> >> > have_no_twophase is no longer needed.\n> >> >>\n> >> >> Hmm, I think If we process one 2pc-non-capable server first and then\n> >> >> process another one 2pc-capable server, we should raise an error but\n> >> >> cannot detect that.\n> >> >\n> >> >\n> >> > Then the check would stay as what you have in the patch:\n> >> >\n> >> > if (!ServerSupportTwophaseCommit(fdw_part))\n> >> >\n> >> > When the non-2pc-capable server is encountered, we would report the\n> error in place (following the ServerSupportTwophaseCommit check) and come\n> out of the loop.\n> >> > have_no_twophase can be dropped.\n> >>\n> >> But if we processed only one non-2pc-capable server, we would raise an\n> >> error but should not in that case.\n> >>\n> >> On second thought, I think we can track how many servers are modified\n> >> or not capable of 2PC during registration and unr-egistration. Then we\n> >> can consider both 2PC is required and there is non-2pc-capable server\n> >> is involved without looking through all participants. Thoughts?\n> >\n> >\n> > That is something worth trying.\n> >\n>\n> I've attached the updated patches that incorporated comments from\n> Zhihong and Ikeda-san.\n>\n> Regards,\n>\n> --\n> Masahiko Sawada\n> EDB: https://www.enterprisedb.com/\n\n\nHi,\nFor v36-0005-Prepare-foreign-transactions-at-commit-time.patch :\n\nWith this commit, the foreign server modified within the transaction\nmarked as 'modified'.\n\nThe verb is missing from the above sentence. 
'within the transaction marked\n' -> within the transaction is marked\n\n+ /* true if modified the data on the server */\n\nmodified the data -> data is modified\n\n+ xid = GetTopTransactionIdIfAny();\n...\n+ if (!TransactionIdIsValid(xid))\n+ xid = GetTopTransactionId();\n\nI wonder when the above if condition is true, would\nthe GetTopTransactionId() get valid xid ? It seems the two func calls are\nthe same.\n\nI like the way checkForeignTwophaseCommitRequired() is structured.\n\nCheers\n\nOn Mon, May 10, 2021 at 9:38 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:On Mon, May 3, 2021 at 11:11 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n>\n>\n> On Mon, May 3, 2021 at 5:25 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>> On Sun, May 2, 2021 at 1:23 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>> >\n>> >\n>> >\n>> > On Fri, Apr 30, 2021 at 9:09 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>> >>\n>> >> On Wed, Mar 17, 2021 at 6:03 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>> >> >\n>> >> > Hi,\n>> >> > For v35-0007-Prepare-foreign-transactions-at-commit-time.patch :\n>> >>\n>> >> Thank you for reviewing the patch!\n>> >>\n>> >> >\n>> >> > With this commit, the foreign server modified within the transaction marked as 'modified'.\n>> >> >\n>> >> > transaction marked -> transaction is marked\n>> >>\n>> >> Will fix.\n>> >>\n>> >> >\n>> >> > +#define IsForeignTwophaseCommitRequested() \\\n>> >> > + (foreign_twophase_commit > FOREIGN_TWOPHASE_COMMIT_DISABLED)\n>> >> >\n>> >> > Since the other enum is FOREIGN_TWOPHASE_COMMIT_REQUIRED, I think the macro should be named: IsForeignTwophaseCommitRequired.\n>> >>\n>> >> But even if foreign_twophase_commit is\n>> >> FOREIGN_TWOPHASE_COMMIT_REQUIRED, the two-phase commit is not used if\n>> >> there is only one modified server, right? 
It seems the name\n>> >> IsForeignTwophaseCommitRequested is fine.\n>> >>\n>> >> >\n>> >> > +static bool\n>> >> > +checkForeignTwophaseCommitRequired(bool local_modified)\n>> >> >\n>> >> > + if (!ServerSupportTwophaseCommit(fdw_part))\n>> >> > + have_no_twophase = true;\n>> >> > ...\n>> >> > + if (have_no_twophase)\n>> >> > + ereport(ERROR,\n>> >> >\n>> >> > It seems the error case should be reported within the loop. This way, we don't need to iterate the other participant(s).\n>> >> > Accordingly, nserverswritten should be incremented for local server prior to the loop. The condition in the loop would become if (!ServerSupportTwophaseCommit(fdw_part) && nserverswritten > 1).\n>> >> > have_no_twophase is no longer needed.\n>> >>\n>> >> Hmm, I think If we process one 2pc-non-capable server first and then\n>> >> process another one 2pc-capable server, we should raise an error but\n>> >> cannot detect that.\n>> >\n>> >\n>> > Then the check would stay as what you have in the patch:\n>> >\n>> > if (!ServerSupportTwophaseCommit(fdw_part))\n>> >\n>> > When the non-2pc-capable server is encountered, we would report the error in place (following the ServerSupportTwophaseCommit check) and come out of the loop.\n>> > have_no_twophase can be dropped.\n>>\n>> But if we processed only one non-2pc-capable server, we would raise an\n>> error but should not in that case.\n>>\n>> On second thought, I think we can track how many servers are modified\n>> or not capable of 2PC during registration and unr-egistration. Then we\n>> can consider both 2PC is required and there is non-2pc-capable server\n>> is involved without looking through all participants. 
Thoughts?\n>\n>\n> That is something worth trying.\n>\n\nI've attached the updated patches that incorporated comments from\nZhihong and Ikeda-san.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/Hi,For v36-0005-Prepare-foreign-transactions-at-commit-time.patch :With this commit, the foreign server modified within the transactionmarked as 'modified'.The verb is missing from the above sentence. 'within the transaction marked ' -> within the transaction is marked + /* true if modified the data on the server */modified the data -> data is modified+ xid = GetTopTransactionIdIfAny();...+ if (!TransactionIdIsValid(xid))+ xid = GetTopTransactionId();I wonder when the above if condition is true, would the GetTopTransactionId() get valid xid ? It seems the two func calls are the same.I like the way checkForeignTwophaseCommitRequired() is structured.Cheers",
"msg_date": "Wed, 12 May 2021 09:44:28 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "\nOn 2021/05/11 13:37, Masahiko Sawada wrote:\n> I've attached the updated patches that incorporated comments from\n> Zhihong and Ikeda-san.\n\nThanks for updating the patches!\n\n\nI have other comments, including trivial things.\n\n\na. about \"foreign_transaction_resolver_timeout\" parameter\n\nNow, the default value of \"foreign_transaction_resolver_timeout\" is 60 secs.\nIs there any reason? Although the following is a minor case, it may confuse some\nusers.\n\nAn example case is:\n\n1. a client executes a transaction with 2PC when the resolver is processing\nFdwXactResolverProcessInDoubtXacts().\n\n2. the resolution of the 1st transaction must wait until the other\ntransactions for 2PC are executed or the timeout expires.\n\n3. if the client checks the 1st result value, it should wait until resolution\nis finished for atomic visibility (although it depends on how atomic\nvisibility is realized). The clients may wait up to\n\"foreign_transaction_resolver_timeout\". Users may think it's stale.\n\nA situation like this can be observed after testing with pgbench. Some\nunresolved transactions remain after benchmarking.\n\nI assume that this default value refers to wal_sender, archiver, and so on.\nBut I think this parameter is more like \"commit_delay\". If so, 60 seconds\nseems to be a big value.\n\n\nb. about performance bottleneck (just sharing my simple benchmark results)\n\nThe resolver process can easily become a performance bottleneck, although I think\nsome users want this feature even if the performance is not so good.\n\nI tested with a very simple workload on my laptop.\n\nThe test conditions are:\n* two remote foreign partitions and one transaction inserts an entry in each\npartition.\n* local connection only. If NW latency became higher, the performance became\nworse.\n* pgbench with 8 clients.\n\nThe test results are the following. 
The performance with 2PC is only about 10% of\nthe performance without 2PC.\n\n* with foreign_twophase_commit = required\n-> If loaded with more than 10 TPS, the number of unresolved foreign transactions\nkeeps increasing, and the run stops with the warning \"Increase\nmax_prepared_foreign_transactions\".\n\n* with foreign_twophase_commit = disabled\n-> 122 TPS in my environment.\n\n\nc. v36-0001-Introduce-transaction-manager-for-foreign-transa.patch\n\n* typo: s/tranasction/transaction/\n\n* Is it better to move AtEOXact_FdwXact() in AbortTransaction() to before \"if\n(IsInParallelMode())\", so that they are in the same order as in CommitTransaction()?\n\n* function names of fdwxact.c\n\nAlthough this depends on my feeling, xact means transaction. If you feel the\nsame, the function names of FdwXactRegisterXact and so on are odd to\nme. Would FdwXactRegisterEntry or FdwXactRegisterParticipant be better?\n\n* Are the following better?\n\n- s/to register the foreign transaction by/to register the foreign transaction\nparticipant by/\n\n- s/The registered foreign transactions/The registered participants/\n\n- s/given foreign transaction/given foreign transaction participant/\n\n- s/Foreign transactions involved in the current transaction/Foreign\ntransaction participants involved in the current transaction/\n\n\nRegards,\n\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 20 May 2021 13:26:09 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Thu, May 20, 2021 at 1:26 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>\n>\n> On 2021/05/11 13:37, Masahiko Sawada wrote:\n> > I've attached the updated patches that incorporated comments from\n> > Zhihong and Ikeda-san.\n>\n> Thanks for updating the patches!\n>\n>\n> I have other comments including trivial things.\n>\n>\n> a. about \"foreign_transaction_resolver_timeout\" parameter\n>\n> Now, the default value of \"foreign_transaction_resolver_timeout\" is 60 secs.\n> Is there any reason? Although the following is minor case, it may confuse some\n> users.\n>\n> Example case is that\n>\n> 1. a client executes transaction with 2PC when the resolver is processing\n> FdwXactResolverProcessInDoubtXacts().\n>\n> 2. the resolution of 1st transaction must be waited until the other\n> transactions for 2pc are executed or timeout.\n>\n> 3. if the client check the 1st result value, it should wait until resolution\n> is finished for atomic visibility (although it depends on the way how to\n> realize atomic visibility.) The clients may be waited\n> foreign_transaction_resolver_timeout\". Users may think it's stale.\n>\n> Like this situation can be observed after testing with pgbench. Some\n> unresolved transaction remains after benchmarking.\n>\n> I assume that this default value refers to wal_sender, archiver, and so on.\n> But, I think this parameter is more like \"commit_delay\". If so, 60 seconds\n> seems to be big value.\n\nIIUC in this situation the foreign transaction resolution is the\nbottleneck and doesn’t catch up to incoming resolution requests. But\nhow does foreign_transaction_resolver_timeout relate to this situation?\nforeign_transaction_resolver_timeout controls when to terminate the\nresolver process that doesn't have any foreign transactions to\nresolve. 
So if we set it several milliseconds, resolver processes are\nterminated immediately after each resolution, imposing the cost of\nlaunching resolver processes on the next resolution.\n\n>\n>\n> b. about performance bottleneck (just share my simple benchmark results)\n>\n> The resolver process can be performance bottleneck easily although I think\n> some users want this feature even if the performance is not so good.\n>\n> I tested with very simple workload in my laptop.\n>\n> The test condition is\n> * two remote foreign partitions and one transaction inserts an entry in each\n> partitions.\n> * local connection only. If NW latency became higher, the performance became\n> worse.\n> * pgbench with 8 clients.\n>\n> The test results is the following. The performance of 2PC is only 10%\n> performance of the one of without 2PC.\n>\n> * with foreign_twophase_commit = requried\n> -> If load with more than 10TPS, the number of unresolved foreign transactions\n> is increasing and stop with the warning \"Increase\n> max_prepared_foreign_transactions\".\n\nWhat was the value of max_prepared_foreign_transactions?\n\nTo speed up the foreign transaction resolution, some ideas have been\ndiscussed. As another idea, how about launching resolvers for each\nforeign server? That way, we resolve foreign transactions on each\nforeign server in parallel. If foreign transactions are concentrated\non the particular server, we can have multiple resolvers for the one\nforeign server. It doesn’t change the fact that all foreign\ntransaction resolutions are processed by resolver processes.\n\nApart from that, we also might want to improve foreign transaction\nmanagement so that transaction doesn’t end up with an error if the\nforeign transaction resolution doesn’t catch up with incoming\ntransactions that require 2PC. Maybe we can evict and serialize a\nstate file when FdwXactCtl->xacts[] is full. 
I’d like to leave it as a\nfuture improvement.\n\n> * with foreign_twophase_commit = disabled\n> -> 122TPS in my environments.\n\nHow much is the performance without those 2PC patches and with the\nsame workload? i.e., how fast is the current postgres_fdw that uses\nXactCallback?\n\n>\n>\n> c. v36-0001-Introduce-transaction-manager-for-foreign-transa.patch\n>\n> * typo: s/tranasction/transaction/\n>\n> * Is it better to move AtEOXact_FdwXact() in AbortTransaction() to before \"if\n> (IsInParallelMode())\" because make them in the same order as CommitTransaction()?\n\nI'd prefer to move AtEOXact_FdwXact() in CommitTransaction after \"if\n(IsInParallelMode())\" since other pre-commit works are done after\ncleaning parallel contexts. What do you think?\n\n>\n> * functions name of fdwxact.c\n>\n> Although this depends on my feeling, xact means transaction. If this feeling\n> same as you, the function names of FdwXactRegisterXact and so on are odd to\n> me. FdwXactRegisterEntry or FdwXactRegisterParticipant is better?\n>\n\nFdwXactRegisterEntry sounds good to me. Thanks.\n\n> * Are the following better?\n>\n> - s/to register the foreign transaction by/to register the foreign transaction\n> participant by/\n>\n> - s/The registered foreign transactions/The registered participants/\n>\n> - s/given foreign transaction/given foreign transaction participant/\n>\n> - s/Foreign transactions involved in the current transaction/Foreign\n> transaction participants involved in the current transaction/\n\nAgreed with the above suggestions.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 21 May 2021 10:39:07 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "\n\nOn 2021/05/21 10:39, Masahiko Sawada wrote:\n> On Thu, May 20, 2021 at 1:26 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>>\n>>\n>> On 2021/05/11 13:37, Masahiko Sawada wrote:\n>>> I've attached the updated patches that incorporated comments from\n>>> Zhihong and Ikeda-san.\n>>\n>> Thanks for updating the patches!\n>>\n>>\n>> I have other comments including trivial things.\n>>\n>>\n>> a. about \"foreign_transaction_resolver_timeout\" parameter\n>>\n>> Now, the default value of \"foreign_transaction_resolver_timeout\" is 60 secs.\n>> Is there any reason? Although the following is minor case, it may confuse some\n>> users.\n>>\n>> Example case is that\n>>\n>> 1. a client executes transaction with 2PC when the resolver is processing\n>> FdwXactResolverProcessInDoubtXacts().\n>>\n>> 2. the resolution of 1st transaction must be waited until the other\n>> transactions for 2pc are executed or timeout.\n>>\n>> 3. if the client check the 1st result value, it should wait until resolution\n>> is finished for atomic visibility (although it depends on the way how to\n>> realize atomic visibility.) The clients may be waited\n>> foreign_transaction_resolver_timeout\". Users may think it's stale.\n>>\n>> Like this situation can be observed after testing with pgbench. Some\n>> unresolved transaction remains after benchmarking.\n>>\n>> I assume that this default value refers to wal_sender, archiver, and so on.\n>> But, I think this parameter is more like \"commit_delay\". If so, 60 seconds\n>> seems to be big value.\n> \n> IIUC this situation seems like the foreign transaction resolution is\n> bottle-neck and doesn’t catch up to incoming resolution requests. But\n> how foreignt_transaction_resolver_timeout relates to this situation?\n> foreign_transaction_resolver_timeout controls when to terminate the\n> resolver process that doesn't have any foreign transactions to\n> resolve. 
So if we set it several milliseconds, resolver processes are\n> terminated immediately after each resolution, imposing the cost of\n> launching resolver processes on the next resolution.\n\nThanks for your comments!\n\nNo, this situation is not related to the foreign transaction resolution is\nbottle-neck or not. This issue may happen when the workload has very few\nforeign transactions.\n\nIf new foreign transaction comes while the transaction resolver is processing\nresolutions via FdwXactResolverProcessInDoubtXacts(), the foreign transaction\nwaits until starting next transaction resolution. If next foreign transaction\ndoesn't come, the foreign transaction must wait starting resolution until\ntimeout. I mentioned this situation.\n\nThanks for letting me know the side effect if setting resolution timeout to\nseveral milliseconds. I agree. But, why termination is needed? Is there a\npossibility to stale like walsender?\n\n\n>>\n>>\n>> b. about performance bottleneck (just share my simple benchmark results)\n>>\n>> The resolver process can be performance bottleneck easily although I think\n>> some users want this feature even if the performance is not so good.\n>>\n>> I tested with very simple workload in my laptop.\n>>\n>> The test condition is\n>> * two remote foreign partitions and one transaction inserts an entry in each\n>> partitions.\n>> * local connection only. If NW latency became higher, the performance became\n>> worse.\n>> * pgbench with 8 clients.\n>>\n>> The test results is the following. 
The performance of 2PC is only 10%\n>> performance of the one of without 2PC.\n>>\n>> * with foreign_twophase_commit = requried\n>> -> If load with more than 10TPS, the number of unresolved foreign transactions\n>> is increasing and stop with the warning \"Increase\n>> max_prepared_foreign_transactions\".\n> \n> What was the value of max_prepared_foreign_transactions?\n\nNow, I tested with 200.\n\nIf each resolution is finished very soon, I thought it's enough because\n8clients x 2partitions = 16, though... But, it's difficult how to know the\nstable values.\n\n\n> To speed up the foreign transaction resolution, some ideas have been\n> discussed. As another idea, how about launching resolvers for each\n> foreign server? That way, we resolve foreign transactions on each\n> foreign server in parallel. If foreign transactions are concentrated\n> on the particular server, we can have multiple resolvers for the one\n> foreign server. It doesn’t change the fact that all foreign\n> transaction resolutions are processed by resolver processes.\n\nAwesome! There seems to be another pros that even if a foreign server is\ntemporarily busy or stopped due to fail over, other foreign server's\ntransactions can be resolved.\n\n\n\n> Apart from that, we also might want to improve foreign transaction\n> management so that transaction doesn’t end up with an error if the\n> foreign transaction resolution doesn’t catch up with incoming\n> transactions that require 2PC. Maybe we can evict and serialize a\n> state file when FdwXactCtl->xacts[] is full. I’d like to leave it as a\n> future improvement.\n\nOh, great! 
I didn't come up with the idea.\n\nAlthough I thought the feature makes difficult to know the foreign transaction\nis resolved stably, DBAs can check \"pg_foreign_xacts\" view now and it's enough\nto output the situation of foreign transactions are spilled to the log.\n\n\n>> * with foreign_twophase_commit = disabled\n>> -> 122TPS in my environments.\n> \n> How much is the performance without those 2PC patches and with the\n> same workload? i.e., how fast is the current postgres_fdw that uses\n> XactCallback?\n\nOK, I'll test.\n\n\n>> c. v36-0001-Introduce-transaction-manager-for-foreign-transa.patch\n>>\n>> * typo: s/tranasction/transaction/\n>>\n>> * Is it better to move AtEOXact_FdwXact() in AbortTransaction() to before \"if\n>> (IsInParallelMode())\" because make them in the same order as CommitTransaction()?\n> \n> I'd prefer to move AtEOXact_FdwXact() in CommitTransaction after \"if\n> (IsInParallelMode())\" since other pre-commit works are done after\n> cleaning parallel contexts. What do you think?\n\nOK, I agree.\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 21 May 2021 12:45:46 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, May 21, 2021 at 12:45 PM Masahiro Ikeda\n<ikedamsh@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/05/21 10:39, Masahiko Sawada wrote:\n> > On Thu, May 20, 2021 at 1:26 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n> >>\n> >>\n> >> On 2021/05/11 13:37, Masahiko Sawada wrote:\n> >>> I've attached the updated patches that incorporated comments from\n> >>> Zhihong and Ikeda-san.\n> >>\n> >> Thanks for updating the patches!\n> >>\n> >>\n> >> I have other comments including trivial things.\n> >>\n> >>\n> >> a. about \"foreign_transaction_resolver_timeout\" parameter\n> >>\n> >> Now, the default value of \"foreign_transaction_resolver_timeout\" is 60 secs.\n> >> Is there any reason? Although the following is minor case, it may confuse some\n> >> users.\n> >>\n> >> Example case is that\n> >>\n> >> 1. a client executes transaction with 2PC when the resolver is processing\n> >> FdwXactResolverProcessInDoubtXacts().\n> >>\n> >> 2. the resolution of 1st transaction must be waited until the other\n> >> transactions for 2pc are executed or timeout.\n> >>\n> >> 3. if the client check the 1st result value, it should wait until resolution\n> >> is finished for atomic visibility (although it depends on the way how to\n> >> realize atomic visibility.) The clients may be waited\n> >> foreign_transaction_resolver_timeout\". Users may think it's stale.\n> >>\n> >> Like this situation can be observed after testing with pgbench. Some\n> >> unresolved transaction remains after benchmarking.\n> >>\n> >> I assume that this default value refers to wal_sender, archiver, and so on.\n> >> But, I think this parameter is more like \"commit_delay\". If so, 60 seconds\n> >> seems to be big value.\n> >\n> > IIUC this situation seems like the foreign transaction resolution is\n> > bottle-neck and doesn’t catch up to incoming resolution requests. 
But\n> > how foreignt_transaction_resolver_timeout relates to this situation?\n> > foreign_transaction_resolver_timeout controls when to terminate the\n> > resolver process that doesn't have any foreign transactions to\n> > resolve. So if we set it several milliseconds, resolver processes are\n> > terminated immediately after each resolution, imposing the cost of\n> > launching resolver processes on the next resolution.\n>\n> Thanks for your comments!\n>\n> No, this situation is not related to the foreign transaction resolution is\n> bottle-neck or not. This issue may happen when the workload has very few\n> foreign transactions.\n>\n> If new foreign transaction comes while the transaction resolver is processing\n> resolutions via FdwXactResolverProcessInDoubtXacts(), the foreign transaction\n> waits until starting next transaction resolution. If next foreign transaction\n> doesn't come, the foreign transaction must wait starting resolution until\n> timeout. I mentioned this situation.\n\nThanks for your explanation. I think that in this case we should set\nthe latch of the resolver after preparing all foreign transactions so\nthat the resolver process those transactions without sleep.\n\n>\n> Thanks for letting me know the side effect if setting resolution timeout to\n> several milliseconds. I agree. But, why termination is needed? Is there a\n> possibility to stale like walsender?\n\nThe purpose of this timeout is to terminate resolvers that are idle\nfor a long time. The resolver processes don't necessarily need to keep\nrunning all the time for every database. On the other hand, launching\na resolver process per commit would be a high cost. So we have\nresolver processes keep running at least for\nforeign_transaction_resolver_timeout.\n\n>\n>\n> >>\n> >>\n> >> b. 
about performance bottleneck (just share my simple benchmark results)\n> >>\n> >> The resolver process can be performance bottleneck easily although I think\n> >> some users want this feature even if the performance is not so good.\n> >>\n> >> I tested with very simple workload in my laptop.\n> >>\n> >> The test condition is\n> >> * two remote foreign partitions and one transaction inserts an entry in each\n> >> partitions.\n> >> * local connection only. If NW latency became higher, the performance became\n> >> worse.\n> >> * pgbench with 8 clients.\n> >>\n> >> The test results is the following. The performance of 2PC is only 10%\n> >> performance of the one of without 2PC.\n> >>\n> >> * with foreign_twophase_commit = requried\n> >> -> If load with more than 10TPS, the number of unresolved foreign transactions\n> >> is increasing and stop with the warning \"Increase\n> >> max_prepared_foreign_transactions\".\n> >\n> > What was the value of max_prepared_foreign_transactions?\n>\n> Now, I tested with 200.\n>\n> If each resolution is finished very soon, I thought it's enough because\n> 8clients x 2partitions = 16, though... But, it's difficult how to know the\n> stable values.\n\nDuring resolving one distributed transaction, the resolver needs both\none round trip and fsync-ing WAL record for each foreign transaction.\nSince the client doesn’t wait for the distributed transaction to be\nresolved, the resolver process can be easily bottle-neck given there\nare 8 clients.\n\nIf foreign transaction resolution was resolved synchronously, 16 would suffice.\n\n>\n>\n> > To speed up the foreign transaction resolution, some ideas have been\n> > discussed. As another idea, how about launching resolvers for each\n> > foreign server? That way, we resolve foreign transactions on each\n> > foreign server in parallel. If foreign transactions are concentrated\n> > on the particular server, we can have multiple resolvers for the one\n> > foreign server. 
It doesn’t change the fact that all foreign\n> > transaction resolutions are processed by resolver processes.\n>\n> Awesome! There seems to be another pros that even if a foreign server is\n> temporarily busy or stopped due to fail over, other foreign server's\n> transactions can be resolved.\n\nYes. We also might need to be careful about the order of foreign\ntransaction resolution. I think we need to resolve foreign\ntransactions in arrival order at least within a foreign server.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 21 May 2021 13:45:20 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "\n\nOn 2021/05/21 13:45, Masahiko Sawada wrote:\n> On Fri, May 21, 2021 at 12:45 PM Masahiro Ikeda\n> <ikedamsh@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2021/05/21 10:39, Masahiko Sawada wrote:\n>>> On Thu, May 20, 2021 at 1:26 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>>>>\n>>>>\n>>>> On 2021/05/11 13:37, Masahiko Sawada wrote:\n>>>>> I've attached the updated patches that incorporated comments from\n>>>>> Zhihong and Ikeda-san.\n>>>>\n>>>> Thanks for updating the patches!\n>>>>\n>>>>\n>>>> I have other comments including trivial things.\n>>>>\n>>>>\n>>>> a. about \"foreign_transaction_resolver_timeout\" parameter\n>>>>\n>>>> Now, the default value of \"foreign_transaction_resolver_timeout\" is 60 secs.\n>>>> Is there any reason? Although the following is minor case, it may confuse some\n>>>> users.\n>>>>\n>>>> Example case is that\n>>>>\n>>>> 1. a client executes transaction with 2PC when the resolver is processing\n>>>> FdwXactResolverProcessInDoubtXacts().\n>>>>\n>>>> 2. the resolution of 1st transaction must be waited until the other\n>>>> transactions for 2pc are executed or timeout.\n>>>>\n>>>> 3. if the client check the 1st result value, it should wait until resolution\n>>>> is finished for atomic visibility (although it depends on the way how to\n>>>> realize atomic visibility.) The clients may be waited\n>>>> foreign_transaction_resolver_timeout\". Users may think it's stale.\n>>>>\n>>>> Like this situation can be observed after testing with pgbench. Some\n>>>> unresolved transaction remains after benchmarking.\n>>>>\n>>>> I assume that this default value refers to wal_sender, archiver, and so on.\n>>>> But, I think this parameter is more like \"commit_delay\". If so, 60 seconds\n>>>> seems to be big value.\n>>>\n>>> IIUC this situation seems like the foreign transaction resolution is\n>>> bottle-neck and doesn’t catch up to incoming resolution requests. 
But\n>>> how foreignt_transaction_resolver_timeout relates to this situation?\n>>> foreign_transaction_resolver_timeout controls when to terminate the\n>>> resolver process that doesn't have any foreign transactions to\n>>> resolve. So if we set it several milliseconds, resolver processes are\n>>> terminated immediately after each resolution, imposing the cost of\n>>> launching resolver processes on the next resolution.\n>>\n>> Thanks for your comments!\n>>\n>> No, this situation is not related to the foreign transaction resolution is\n>> bottle-neck or not. This issue may happen when the workload has very few\n>> foreign transactions.\n>>\n>> If new foreign transaction comes while the transaction resolver is processing\n>> resolutions via FdwXactResolverProcessInDoubtXacts(), the foreign transaction\n>> waits until starting next transaction resolution. If next foreign transaction\n>> doesn't come, the foreign transaction must wait starting resolution until\n>> timeout. I mentioned this situation.\n> \n> Thanks for your explanation. I think that in this case we should set\n> the latch of the resolver after preparing all foreign transactions so\n> that the resolver process those transactions without sleep.\n\nYes, your idea is much better. Thanks!\n\n\n>>\n>> Thanks for letting me know the side effect if setting resolution timeout to\n>> several milliseconds. I agree. But, why termination is needed? Is there a\n>> possibility to stale like walsender?\n> \n> The purpose of this timeout is to terminate resolvers that are idle\n> for a long time. The resolver processes don't necessarily need to keep\n> running all the time for every database. On the other hand, launching\n> a resolver process per commit would be a high cost. So we have\n> resolver processes keep running at least for\n> foreign_transaction_resolver_timeout.\nUnderstood. I think it's reasonable.\n\n\n>>>>\n>>>>\n>>>> b. 
about performance bottleneck (just share my simple benchmark results)\n>>>>\n>>>> The resolver process can be performance bottleneck easily although I think\n>>>> some users want this feature even if the performance is not so good.\n>>>>\n>>>> I tested with very simple workload in my laptop.\n>>>>\n>>>> The test condition is\n>>>> * two remote foreign partitions and one transaction inserts an entry in each\n>>>> partitions.\n>>>> * local connection only. If NW latency became higher, the performance became\n>>>> worse.\n>>>> * pgbench with 8 clients.\n>>>>\n>>>> The test results is the following. The performance of 2PC is only 10%\n>>>> performance of the one of without 2PC.\n>>>>\n>>>> * with foreign_twophase_commit = requried\n>>>> -> If load with more than 10TPS, the number of unresolved foreign transactions\n>>>> is increasing and stop with the warning \"Increase\n>>>> max_prepared_foreign_transactions\".\n>>>\n>>> What was the value of max_prepared_foreign_transactions?\n>>\n>> Now, I tested with 200.\n>>\n>> If each resolution is finished very soon, I thought it's enough because\n>> 8clients x 2partitions = 16, though... But, it's difficult how to know the\n>> stable values.\n> \n> During resolving one distributed transaction, the resolver needs both\n> one round trip and fsync-ing WAL record for each foreign transaction.\n> Since the client doesn’t wait for the distributed transaction to be\n> resolved, the resolver process can be easily bottle-neck given there\n> are 8 clients.\n> \n> If foreign transaction resolution was resolved synchronously, 16 would suffice.\n\nOK, thanks.\n\n\n>>\n>>\n>>> To speed up the foreign transaction resolution, some ideas have been\n>>> discussed. As another idea, how about launching resolvers for each\n>>> foreign server? That way, we resolve foreign transactions on each\n>>> foreign server in parallel. 
If foreign transactions are concentrated\n>>> on the particular server, we can have multiple resolvers for the one\n>>> foreign server. It doesn’t change the fact that all foreign\n>>> transaction resolutions are processed by resolver processes.\n>>\n>> Awesome! There seems to be another pros that even if a foreign server is\n>> temporarily busy or stopped due to fail over, other foreign server's\n>> transactions can be resolved.\n> \n> Yes. We also might need to be careful about the order of foreign\n> transaction resolution. I think we need to resolve foreign\n> transactions in arrival order at least within a foreign server.\n\nI agree it's better.\n\n(Although this is my interest...)\nIs it necessary? Although this idea seems to be for atomic visibility,\n2PC can't realize that as you know. So, I wondered that.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 21 May 2021 17:48:08 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, May 21, 2021 at 5:48 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/05/21 13:45, Masahiko Sawada wrote:\n> >\n> > Yes. We also might need to be careful about the order of foreign\n> > transaction resolution. I think we need to resolve foreign\n> > transactions in arrival order at least within a foreign server.\n>\n> I agree it's better.\n>\n> (Although this is my interest...)\n> Is it necessary? Although this idea seems to be for atomic visibility,\n> 2PC can't realize that as you know. So, I wondered that.\n\nI think it's for fairness. If a foreign transaction arrived earlier\ngets put off so often for other foreign transactions arrived later due\nto its index in FdwXactCtl->xacts, it’s not understandable for users\nand not fair. I think it’s better to handle foreign transactions in\nFIFO manner (although this problem exists even in the current code).\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 25 May 2021 21:59:07 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "\n\nOn 2021/05/25 21:59, Masahiko Sawada wrote:\n> On Fri, May 21, 2021 at 5:48 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>>\n>> On 2021/05/21 13:45, Masahiko Sawada wrote:\n>>>\n>>> Yes. We also might need to be careful about the order of foreign\n>>> transaction resolution. I think we need to resolve foreign\n>>> transactions in arrival order at least within a foreign server.\n>>\n>> I agree it's better.\n>>\n>> (Although this is my interest...)\n>> Is it necessary? Although this idea seems to be for atomic visibility,\n>> 2PC can't realize that as you know. So, I wondered that.\n> \n> I think it's for fairness. If a foreign transaction arrived earlier\n> gets put off so often for other foreign transactions arrived later due\n> to its index in FdwXactCtl->xacts, it’s not understandable for users\n> and not fair. I think it’s better to handle foreign transactions in\n> FIFO manner (although this problem exists even in the current code).\n\nOK, thanks.\n\n\nOn 2021/05/21 12:45, Masahiro Ikeda wrote:\n> On 2021/05/21 10:39, Masahiko Sawada wrote:\n>> On Thu, May 20, 2021 at 1:26 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com>\nwrote:\n>> How much is the performance without those 2PC patches and with the\n>> same workload? i.e., how fast is the current postgres_fdw that uses\n>> XactCallback?\n>\n> OK, I'll test.\n\nThe test results are the following. 
But, I couldn't confirm the performance\nimprovements of 2PC patches though I may need to be changed the test condition.\n\n[condition]\n* 1 coordinator and 3 foreign servers\n* There are two custom scripts which access different two foreign servers per\ntransaction\n\n``` fxact_select.pgbench\nBEGIN;\nSELECT * FROM part:p1 WHERE id = :id;\nSELECT * FROM part:p2 WHERE id = :id;\nCOMMIT;\n```\n\n``` fxact_update.pgbench\nBEGIN;\nUPDATE part:p1 SET md5 = md5(clock_timestamp()::text) WHERE id = :id;\nUPDATE part:p2 SET md5 = md5(clock_timestamp()::text) WHERE id = :id;\nCOMMIT;\n```\n\n[results]\n\nI have tested three times.\nPerformance difference seems to be within the range of errors.\n\n# 6d0eb38557 with 2pc patches(v36) and foreign_twophase_commit = disable\n- fxact_update.pgbench\n72.3, 74.9, 77.5 TPS => avg 74.9 TPS\n110.5, 106.8, 103.2 ms => avg 106.8 ms\n\n- fxact_select.pgbench\n1767.6, 1737.1, 1717.4 TPS => avg 1740.7 TPS\n4.5, 4.6, 4.7 ms => avg 4.6ms\n\n# 6d0eb38557 without 2pc patches\n- fxact_update.pgbench\n76.5, 70.6, 69.5 TPS => avg 72.2 TPS\n104.534 + 113.244 + 115.097 => avg 111.0 ms\n\n-fxact_select.pgbench\n1810.2, 1748.3, 1737.2 TPS => avg 1765.2 TPS\n4.2, 4.6, 4.6 ms=> 4.5 ms\n\n\n\n\n\n# About the bottleneck of the resolver process\n\nI investigated the performance bottleneck of the resolver process using perf.\nThe main bottleneck is the following functions.\n\n1st. 42.8% routine->CommitForeignTransaction()\n2nd. 31.5% remove_fdwxact()\n3rd. 10.16% CommitTransaction()\n\n1st and 3rd problems can be solved by parallelizing resolver processes per\nremote servers. But, I wondered that the idea, which backends call also\n\"COMMIT/ABORT PREPARED\" and the resolver process only takes changes of\nresolving in-doubt foreign transactions, is better. In many cases, I think\nthat the number of connections is much greater than the number of remote\nservers. 
If so, the parallelization is not enough.\n\nSo, I think the idea in which backends execute \"COMMIT PREPARED\" synchronously is\nbetter. Citus has the 2PC feature and its backends send \"COMMIT PREPARED\" in\nthe extension. So, this idea is not bad.\n\nAlthough resolving asynchronously has a performance benefit, we can't take\nadvantage of it because the resolver process can easily become a bottleneck now.\n\n\n2nd, remove_fdwxact() syncs the WAL, which indicates the foreign transaction\nentry is removed. Is it necessary to sync immediately?\n\nRemoving the sync means the recovery phase may take longer because some\nfdwxact entries need \"COMMIT/ABORT PREPARED\" again. But I think the effect\nis limited.\n\n\n# About other trivial comments.\n\n* Is it better to call pgstat_send_wal() in the resolver process?\n\n* Is it better to specify that only one resolver process can be launched in one\ndatabase in the description of \"max_foreign_transaction_resolvers\"?\n\n* Is the removal and insertion of new lines in foreigncmds.c intentional?\n\n* Is it better that \"max_prepared_foreign_transactions=%d\" comes after\n\"max_prepared_xacts=%d\" in xlogdesc.c?\n\n* Is \"fdwxact_queue\" unnecessary now?\n\n* Is the following \" + sizeof(FdwXactResolver)\" unnecessary?\n\n#define SizeOfFdwXactResolverCtlData \\\n\t(offsetof(FdwXactResolverCtlData, resolvers) + sizeof(FdwXactResolver))\n\nAlthough MultiXactStateData assumes the backendIds are 1-indexed,\nthe resolvers are 0-indexed. 
Sorry, if my understanding is wrong.\n\n* s/transaciton/transaction/\n\n* s/foreign_xact_resolution_retry_interval since last\nresolver/foreign_xact_resolution_retry_interval since last resolver was/\n\n* Don't we need the debug log in the following in postgres.c like logical\nlauncher shutdown?\n\n\telse if (IsFdwXactLauncher())\n\t{\n\t\t/*\n\t\t* The foreign transaction launcher can be stopped at any time.\n\t\t* Use exit status 1 so the background worker is restarted.\n\t\t*/\n\t\tproc_exit(1);\n\t}\n\n* Is pg_stop_foreign_xact_resolver(PG_FUNCTION_ARGS) not documented?\n\n* Is it better from \"when arrived a requested by backend process.\" to\n\"when a request by backend process is arrived.\"?\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 3 Jun 2021 13:56:49 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 1:56 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/05/25 21:59, Masahiko Sawada wrote:\n> > On Fri, May 21, 2021 at 5:48 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n> >>\n> >> On 2021/05/21 13:45, Masahiko Sawada wrote:\n> >>>\n> >>> Yes. We also might need to be careful about the order of foreign\n> >>> transaction resolution. I think we need to resolve foreign> transactions in arrival order at least within a foreign server.\n> >>\n> >> I agree it's better.\n> >>\n> >> (Although this is my interest...)\n> >> Is it necessary? Although this idea seems to be for atomic visibility,\n> >> 2PC can't realize that as you know. So, I wondered that.\n> >\n> > I think it's for fairness. If a foreign transaction arrived earlier\n> > gets put off so often for other foreign transactions arrived later due\n> > to its index in FdwXactCtl->xacts, it’s not understandable for users\n> > and not fair. I think it’s better to handle foreign transactions in\n> > FIFO manner (although this problem exists even in the current code).\n>\n> OK, thanks.\n>\n>\n> On 2021/05/21 12:45, Masahiro Ikeda wrote:\n> > On 2021/05/21 10:39, Masahiko Sawada wrote:\n> >> On Thu, May 20, 2021 at 1:26 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com>\n> wrote:\n> >> How much is the performance without those 2PC patches and with the\n> >> same workload? i.e., how fast is the current postgres_fdw that uses\n> >> XactCallback?\n> >\n> > OK, I'll test.\n>\n> The test results are followings. 
But, I couldn't confirm the performance\n> improvements of 2PC patches though I may need to be changed the test condition.\n>\n> [condition]\n> * 1 coordinator and 3 foreign servers\n> * There are two custom scripts which access different two foreign servers per\n> transaction\n>\n> ``` fxact_select.pgbench\n> BEGIN;\n> SELECT * FROM part:p1 WHERE id = :id;\n> SELECT * FROM part:p2 WHERE id = :id;\n> COMMIT;\n> ```\n>\n> ``` fxact_update.pgbench\n> BEGIN;\n> UPDATE part:p1 SET md5 = md5(clock_timestamp()::text) WHERE id = :id;\n> UPDATE part:p2 SET md5 = md5(clock_timestamp()::text) WHERE id = :id;\n> COMMIT;\n> ```\n>\n> [results]\n>\n> I have tested three times.\n> Performance difference seems to be within the range of errors.\n>\n> # 6d0eb38557 with 2pc patches(v36) and foreign_twophase_commit = disable\n> - fxact_update.pgbench\n> 72.3, 74.9, 77.5 TPS => avg 74.9 TPS\n> 110.5, 106.8, 103.2 ms => avg 106.8 ms\n>\n> - fxact_select.pgbench\n> 1767.6, 1737.1, 1717.4 TPS => avg 1740.7 TPS\n> 4.5, 4.6, 4.7 ms => avg 4.6ms\n>\n> # 6d0eb38557 without 2pc patches\n> - fxact_update.pgbench\n> 76.5, 70.6, 69.5 TPS => avg 72.2 TPS\n> 104.534 + 113.244 + 115.097 => avg 111.0 ms\n>\n> -fxact_select.pgbench\n> 1810.2, 1748.3, 1737.2 TPS => avg 1765.2 TPS\n> 4.2, 4.6, 4.6 ms=> 4.5 ms\n>\n\nThank you for testing!\n\nI think the result shows that managing foreign transactions on the\ncore side would not be a problem in terms of performance.\n\n>\n>\n>\n>\n> # About the bottleneck of the resolver process\n>\n> I investigated the performance bottleneck of the resolver process using perf.\n> The main bottleneck is the following functions.\n>\n> 1st. 42.8% routine->CommitForeignTransaction()\n> 2nd. 31.5% remove_fdwxact()\n> 3rd. 10.16% CommitTransaction()\n>\n> 1st and 3rd problems can be solved by parallelizing resolver processes per\n> remote servers. 
But, I wondered that the idea, which backends call also\n> \"COMMIT/ABORT PREPARED\" and the resolver process only takes changes of\n> resolving in-doubt foreign transactions, is better. In many cases, I think\n> that the number of connections is much greater than the number of remote\n> servers. If so, the parallelization is not enough.\n>\n> So, I think the idea which backends execute \"PREPARED COMMIT\" synchronously is\n> better. The citus has the 2PC feature and backends send \"PREPARED COMMIT\" in\n> the extension. So, this idea is not bad.\n\nThank you for pointing it out. This idea has been proposed several\ntimes and there were discussions. I'd like to summarize the proposed\nideas and those pros and cons before replying to your other comments.\n\nThere are 3 ideas. After backend both prepares all foreign transaction\nand commit the local transaction,\n\n1. the backend continues attempting to commit all prepared foreign\ntransactions until all of them are committed.\n2. the backend attempts to commit all prepared foreign transactions\nonce. If an error happens, leave them for the resolver.\n3. the backend asks the resolver that launched per foreign server to\ncommit the prepared foreign transactions (and backend waits or doesn't\nwait for the commit completion depending on the setting).\n\nWith ideas 1 and 2, since the backend itself commits all foreign\ntransactions the resolver process cannot be a bottleneck, and probably\nthe code can get more simple as backends don't need to communicate\nwith resolver processes.\n\nHowever, those have two problems we need to deal with:\n\nFirst, users could get an error if an error happens during the backend\ncommitting prepared foreign transaction but the local transaction is\nalready committed and some foreign transactions could also be\ncommitted, confusing users. 
There were two opinions on this problem:\nFDW developers should be responsible for writing FDW code such that\nany error doesn't happen during committing foreign transactions, and\nusers can accept that confusion since an error could happen after\nwriting the commit WAL even today without this 2PC feature. For the\nformer point, I'm not sure it's always doable since even palloc()\ncould raise an error and it seems hard to require all FDW developers\nto understand all possible paths of raising an error. And for the\nlatter point, that's true but I think those cases are\nshould-not-happen cases (i.e., rare cases) whereas the likelihood of\nan error during committing prepared transactions is not low (e.g., by\nnetwork connectivity problem). I think we need to assume that that is\nnot a rare case.\n\nThe second problem is whether we can cancel committing foreign\ntransactions by pg_cancel_backend() (or pressing Ctrl-C). If the\nbackend process commits prepared foreign transactions, it's FDW\ndevelopers' responsibility to write code that is interruptible. I’m\nnot sure it’s feasible for drivers for other databases.\n\nIdea 3 is proposed to deal with those problems. By having separate\nresolver processes commit prepared foreign\ntransactions, we and FDW developers don't need to worry about those\ntwo problems.\n\nHowever, as Ikeda-san shared the performance results, idea 3 is likely\nto have a performance problem since resolver processes can easily become a\nbottleneck. Moreover, with the current patch, since we asynchronously\ncommit foreign prepared transactions, if many concurrent clients use\n2PC and max_foreign_prepared_transactions is reached, transactions end up\nwith an error.\n\nThrough the long discussion on this thread, I've thought we got a\nconsensus on idea 3, but sometimes ideas 1 and 2 are proposed again for\ndealing with the performance problem. 
Idea 1 and 2 are also good and\nattractive, but I think we need to deal with the two problems first if\nwe go with one of those ideas. To be honest, I'm really not sure it's\ngood if we make those things FDW developers responsibility.\n\nAs long as we commit foreign prepared transactions asynchronously and\nthere is max_foreign_prepared_transactions limit, it's possible that\ncommitting those transactions could not keep up. Maybe the same is\ntrue for a case where the client heavily uses 2PC and asynchronously\ncommits prepared transactions. If committing prepared transactions\ndoesn't keep up with preparing transactions, the system reaches\nmax_prepared_transactions.\n\nWith the current patch, we commit prepared foreign transactions\nasynchronously. But maybe we need to compare the performance of ideas\n1 (and 2) to idea 3 with synchronous foreign transaction resolution.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 4 Jun 2021 12:28:46 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "> 2021/06/04 12:28、Masahiko Sawada <sawada.mshk@gmail.com>のメール:\n> \n> On Thu, Jun 3, 2021 at 1:56 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com <mailto:ikedamsh@oss.nttdata.com>> wrote:\n>> \n>> \n>> \n>> On 2021/05/25 21:59, Masahiko Sawada wrote:\n>>> On Fri, May 21, 2021 at 5:48 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>>>> \n>>>> On 2021/05/21 13:45, Masahiko Sawada wrote:\n>>>>> \n>>>>> Yes. We also might need to be careful about the order of foreign\n>>>>> transaction resolution. I think we need to resolve foreign> transactions in arrival order at least within a foreign server.\n>>>> \n>>>> I agree it's better.\n>>>> \n>>>> (Although this is my interest...)\n>>>> Is it necessary? Although this idea seems to be for atomic visibility,\n>>>> 2PC can't realize that as you know. So, I wondered that.\n>>> \n>>> I think it's for fairness. If a foreign transaction arrived earlier\n>>> gets put off so often for other foreign transactions arrived later due\n>>> to its index in FdwXactCtl->xacts, it’s not understandable for users\n>>> and not fair. I think it’s better to handle foreign transactions in\n>>> FIFO manner (although this problem exists even in the current code).\n>> \n>> OK, thanks.\n>> \n>> \n>> On 2021/05/21 12:45, Masahiro Ikeda wrote:\n>>> On 2021/05/21 10:39, Masahiko Sawada wrote:\n>>>> On Thu, May 20, 2021 at 1:26 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com>\n>> wrote:\n>>>> How much is the performance without those 2PC patches and with the\n>>>> same workload? i.e., how fast is the current postgres_fdw that uses\n>>>> XactCallback?\n>>> \n>>> OK, I'll test.\n>> \n>> The test results are followings. 
But, I couldn't confirm the performance\n>> improvements of 2PC patches though I may need to be changed the test condition.\n>> \n>> [condition]\n>> * 1 coordinator and 3 foreign servers\n>> * There are two custom scripts which access different two foreign servers per\n>> transaction\n>> \n>> ``` fxact_select.pgbench\n>> BEGIN;\n>> SELECT * FROM part:p1 WHERE id = :id;\n>> SELECT * FROM part:p2 WHERE id = :id;\n>> COMMIT;\n>> ```\n>> \n>> ``` fxact_update.pgbench\n>> BEGIN;\n>> UPDATE part:p1 SET md5 = md5(clock_timestamp()::text) WHERE id = :id;\n>> UPDATE part:p2 SET md5 = md5(clock_timestamp()::text) WHERE id = :id;\n>> COMMIT;\n>> ```\n>> \n>> [results]\n>> \n>> I have tested three times.\n>> Performance difference seems to be within the range of errors.\n>> \n>> # 6d0eb38557 with 2pc patches(v36) and foreign_twophase_commit = disable\n>> - fxact_update.pgbench\n>> 72.3, 74.9, 77.5 TPS => avg 74.9 TPS\n>> 110.5, 106.8, 103.2 ms => avg 106.8 ms\n>> \n>> - fxact_select.pgbench\n>> 1767.6, 1737.1, 1717.4 TPS => avg 1740.7 TPS\n>> 4.5, 4.6, 4.7 ms => avg 4.6ms\n>> \n>> # 6d0eb38557 without 2pc patches\n>> - fxact_update.pgbench\n>> 76.5, 70.6, 69.5 TPS => avg 72.2 TPS\n>> 104.534 + 113.244 + 115.097 => avg 111.0 ms\n>> \n>> -fxact_select.pgbench\n>> 1810.2, 1748.3, 1737.2 TPS => avg 1765.2 TPS\n>> 4.2, 4.6, 4.6 ms=> 4.5 ms\n>> \n> \n> Thank you for testing!\n> \n> I think the result shows that managing foreign transactions on the\n> core side would not be a problem in terms of performance.\n> \n>> \n>> \n>> \n>> \n>> # About the bottleneck of the resolver process\n>> \n>> I investigated the performance bottleneck of the resolver process using perf.\n>> The main bottleneck is the following functions.\n>> \n>> 1st. 42.8% routine->CommitForeignTransaction()\n>> 2nd. 31.5% remove_fdwxact()\n>> 3rd. 10.16% CommitTransaction()\n>> \n>> 1st and 3rd problems can be solved by parallelizing resolver processes per\n>> remote servers. 
But, I wondered that the idea, which backends call also\n>> \"COMMIT/ABORT PREPARED\" and the resolver process only takes changes of\n>> resolving in-doubt foreign transactions, is better. In many cases, I think\n>> that the number of connections is much greater than the number of remote\n>> servers. If so, the parallelization is not enough.\n>> \n>> So, I think the idea which backends execute \"PREPARED COMMIT\" synchronously is\n>> better. The citus has the 2PC feature and backends send \"PREPARED COMMIT\" in\n>> the extension. So, this idea is not bad.\n> \n> Thank you for pointing it out. This idea has been proposed several\n> times and there were discussions. I'd like to summarize the proposed\n> ideas and those pros and cons before replying to your other comments.\n> \n> There are 3 ideas. After backend both prepares all foreign transaction\n> and commit the local transaction,\n> \n> 1. the backend continues attempting to commit all prepared foreign\n> transactions until all of them are committed.\n> 2. the backend attempts to commit all prepared foreign transactions\n> once. If an error happens, leave them for the resolver.\n> 3. the backend asks the resolver that launched per foreign server to\n> commit the prepared foreign transactions (and backend waits or doesn't\n> wait for the commit completion depending on the setting).\n> \n> With ideas 1 and 2, since the backend itself commits all foreign\n> transactions the resolver process cannot be a bottleneck, and probably\n> the code can get more simple as backends don't need to communicate\n> with resolver processes.\n> \n> However, those have two problems we need to deal with:\n\nThanks for sharing the summary. 
I understand there are problems related to the\nFDW implementation.\n\n> First, users could get an error if an error happens during the backend\n> committing prepared foreign transaction but the local transaction is\n> already committed and some foreign transactions could also be\n> committed, confusing users. There were two opinions to this problem:\n> FDW developers should be responsible for writing FDW code such that\n> any error doesn't happen during committing foreign transactions, and\n> users can accept that confusion since an error could happen after\n> writing the commit WAL even today without this 2PC feature. For the\n> former point, I'm not sure it's always doable since even palloc()\n> could raise an error and it seems hard to require all FDW developers\n> to understand all possible paths of raising an error. And for the\n> latter point, that's true but I think those cases are\n> should-not-happen cases (i.e., rare cases) whereas the likelihood of\n> an error during committing prepared transactions is not low (e.g., by\n> network connectivity problem). I think we need to assume that that is\n> not a rare case.\n\nHmm… Sorry, I don’t have any good ideas now.\n\nIf anything, I’m on the second side, where users accept the confusion, though\nit is necessary to let users know whether the error happened before or after the\nlocal commit was done, because in the former case users will execute the same query again.\n\n\n> The second problem is whether we can cancel committing foreign\n> transactions by pg_cancel_backend() (or pressing Ctrl-C). If the\n> backend process commits prepared foreign transactions, it's FDW\n> developers' responsibility to write code that is interruptible. 
I’m\n> not sure it’s feasible for drivers for other databases.\n\nSorry, my understanding is not clear.\n\nAfter all prepares are done, the foreign transactions will be committed.\nSo, does this mean that FDW must leave the unresolved transaction to the transaction\nresolver and show some messages like “Since the transaction is already committed,\nthe transaction will be resolved in background\" ?\n\n\n> Idea 3 is proposed to deal with those problems. By having separate\n> processes, resolver processes, committing prepared foreign\n> transactions, we and FDW developers don't need to worry about those\n> two problems.\n> \n> However as Ikeda-san shared the performance results, idea 3 is likely\n> to have a performance problem since resolver processes can easily be\n> bottle-neck. Moreover, with the current patch, since we asynchronously\n> commit foreign prepared transactions, if many concurrent clients use\n> 2PC, reaching max_foreign_prepared_transactions, transactions end up\n> with an error.\n> \n> Through the long discussion on this thread, I've been thought we got a\n> consensus on idea 3 but sometimes ideas 1 and 2 are proposed again for\n> dealing with the performance problem. Idea 1 and 2 are also good and\n> attractive, but I think we need to deal with the two problems first if\n> we go with one of those ideas. To be honest, I'm really not sure it's\n> good if we make those things FDW developers responsibility.\n> \n> As long as we commit foreign prepared transactions asynchronously and\n> there is max_foreign_prepared_transactions limit, it's possible that\n> committing those transactions could not keep up. Maybe the same is\n> true for a case where the client heavily uses 2PC and asynchronously\n> commits prepared transactions. If committing prepared transactions\n> doesn't keep up with preparing transactions, the system reaches\n> max_prepared_transactions.\n> \n> With the current patch, we commit prepared foreign transactions\n> asynchronously. 
But maybe we need to compare the performance of ideas\n> 1 (and 2) to idea 3 with synchronous foreign transaction resolution.\n\n\nOK, I understood the consensus is the 3rd one. I agree with it since I don’t have any solutions\nfor the problems related to the 1st and 2nd. If I find any, I’ll share them with you.\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Fri, 4 Jun 2021 15:58:47 +0900",
"msg_from": "\"ikedamsh@oss.nttdata.com\" <ikedamsh@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "From: Masahiko Sawada <sawada.mshk@gmail.com>\r\n1. the backend continues attempting to commit all prepared foreign\r\n> transactions until all of them are committed.\r\n> 2. the backend attempts to commit all prepared foreign transactions\r\n> once. If an error happens, leave them for the resolver.\r\n> 3. the backend asks the resolver that launched per foreign server to\r\n> commit the prepared foreign transactions (and backend waits or doesn't\r\n> wait for the commit completion depending on the setting).\r\n> \r\n> With ideas 1 and 2, since the backend itself commits all foreign\r\n> transactions the resolver process cannot be a bottleneck, and probably\r\n> the code can get more simple as backends don't need to communicate\r\n> with resolver processes.\r\n> \r\n> However, those have two problems we need to deal with:\r\n> \r\n\r\n> First, users could get an error if an error happens during the backend\r\n> committing prepared foreign transaction but the local transaction is\r\n> already committed and some foreign transactions could also be\r\n> committed, confusing users. There were two opinions to this problem:\r\n> FDW developers should be responsible for writing FDW code such that\r\n> any error doesn't happen during committing foreign transactions, and\r\n> users can accept that confusion since an error could happen after\r\n> writing the commit WAL even today without this 2PC feature. \r\n\r\nWhy does the user have to get an error? Once the local transaction has been prepared, which means all remote ones also have been prepared, the whole transaction is determined to commit. 
So, the user doesn't have to receive an error as long as the local node is alive.\r\n\r\n\r\n> For the\r\n> former point, I'm not sure it's always doable since even palloc()\r\n> could raise an error and it seems hard to require all FDW developers\r\n> to understand all possible paths of raising an error.\r\n\r\nNo, this is a matter of discipline to ensure consistency, just in case we really have to return an error to the user.\r\n\r\n\r\n> And for the\r\n> latter point, that's true but I think those cases are\r\n> should-not-happen cases (i.g., rare cases) whereas the likelihood of\r\n> an error during committing prepared transactions is not low (e.g., by\r\n> network connectivity problem). I think we need to assume that that is\r\n> not a rare case.\r\n\r\nHow do non-2PC and 2PC cases differ in the rarity of the error?\r\n\r\n\r\n> The second problem is whether we can cancel committing foreign\r\n> transactions by pg_cancel_backend() (or pressing Ctl-c). If the\r\n> backend process commits prepared foreign transactions, it's FDW\r\n> developers' responsibility to write code that is interruptible. I’m\r\n> not sure it’s feasible for drivers for other databases.\r\n\r\nThat's true not only for prepare and commit but also for other queries. Why do we have to treat prepare and commit specially?\r\n\r\n\r\n> Through the long discussion on this thread, I've been thought we got a\r\n> consensus on idea 3 but sometimes ideas 1 and 2 are proposed again for\r\n\r\nI don't remember seeing any consensus yet?\r\n\r\n> With the current patch, we commit prepared foreign transactions\r\n> asynchronously. But maybe we need to compare the performance of ideas\r\n> 1 (and 2) to idea 3 with synchronous foreign transaction resolution.\r\n\r\n+1\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Fri, 4 Jun 2021 08:04:27 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, Jun 4, 2021 at 3:58 PM ikedamsh@oss.nttdata.com\n<ikedamsh@oss.nttdata.com> wrote:\n>\n>\n>\n> 2021/06/04 12:28、Masahiko Sawada <sawada.mshk@gmail.com>のメール:\n>\n>\n> Thank you for pointing it out. This idea has been proposed several\n> times and there were discussions. I'd like to summarize the proposed\n> ideas and those pros and cons before replying to your other comments.\n>\n> There are 3 ideas. After backend both prepares all foreign transaction\n> and commit the local transaction,\n>\n> 1. the backend continues attempting to commit all prepared foreign\n> transactions until all of them are committed.\n> 2. the backend attempts to commit all prepared foreign transactions\n> once. If an error happens, leave them for the resolver.\n> 3. the backend asks the resolver that launched per foreign server to\n> commit the prepared foreign transactions (and backend waits or doesn't\n> wait for the commit completion depending on the setting).\n>\n> With ideas 1 and 2, since the backend itself commits all foreign\n> transactions the resolver process cannot be a bottleneck, and probably\n> the code can get more simple as backends don't need to communicate\n> with resolver processes.\n>\n> However, those have two problems we need to deal with:\n>\n>\n> Thanks for sharing the summarize. I understood there are problems related to\n> FDW implementation.\n>\n> First, users could get an error if an error happens during the backend\n> committing prepared foreign transaction but the local transaction is\n> already committed and some foreign transactions could also be\n> committed, confusing users. There were two opinions to this problem:\n> FDW developers should be responsible for writing FDW code such that\n> any error doesn't happen during committing foreign transactions, and\n> users can accept that confusion since an error could happen after\n> writing the commit WAL even today without this 2PC feature. 
For the\n> former point, I'm not sure it's always doable since even palloc()\n> could raise an error and it seems hard to require all FDW developers\n> to understand all possible paths of raising an error. And for the\n> latter point, that's true but I think those cases are\n> should-not-happen cases (i.g., rare cases) whereas the likelihood of\n> an error during committing prepared transactions is not low (e.g., by\n> network connectivity problem). I think we need to assume that that is\n> not a rare case.\n>\n>\n> Hmm… Sorry, I don’t have any good ideas now.\n>\n> If anything, I’m on second side which users accept the confusion though\n> let users know a error happens before local commit is done or not is necessary\n> because if the former case, users will execute the same query again.\n\nYeah, users will need to remember the XID of the last executed\ntransaction and check if it has been committed by pg_xact_status().\n\n>\n>\n> The second problem is whether we can cancel committing foreign\n> transactions by pg_cancel_backend() (or pressing Ctl-c). If the\n> backend process commits prepared foreign transactions, it's FDW\n> developers' responsibility to write code that is interruptible. I’m\n> not sure it’s feasible for drivers for other databases.\n>\n>\n> Sorry, my understanding is not clear.\n>\n> After all prepares are done, the foreign transactions will be committed.\n> So, does this mean that FDW must leave the unresolved transaction to the transaction\n> resolver and show some messages like “Since the transaction is already committed,\n> the transaction will be resolved in background\" ?\n\nI think this would happen after the backend cancels COMMIT PREPARED.\nTo be able to cancel an in-progress query the backend needs to accept\nthe interruption and send the cancel request. 
postgres_fdw can do that\nsince libpq supports sending a query and waiting for the result but\nI’m not sure about other drivers.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 4 Jun 2021 17:16:42 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, Jun 4, 2021 at 5:04 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Masahiko Sawada <sawada.mshk@gmail.com>\n> 1. the backend continues attempting to commit all prepared foreign\n> > transactions until all of them are committed.\n> > 2. the backend attempts to commit all prepared foreign transactions\n> > once. If an error happens, leave them for the resolver.\n> > 3. the backend asks the resolver that launched per foreign server to\n> > commit the prepared foreign transactions (and backend waits or doesn't\n> > wait for the commit completion depending on the setting).\n> >\n> > With ideas 1 and 2, since the backend itself commits all foreign\n> > transactions the resolver process cannot be a bottleneck, and probably\n> > the code can get more simple as backends don't need to communicate\n> > with resolver processes.\n> >\n> > However, those have two problems we need to deal with:\n> >\n>\n> > First, users could get an error if an error happens during the backend\n> > committing prepared foreign transaction but the local transaction is\n> > already committed and some foreign transactions could also be\n> > committed, confusing users. There were two opinions to this problem:\n> > FDW developers should be responsible for writing FDW code such that\n> > any error doesn't happen during committing foreign transactions, and\n> > users can accept that confusion since an error could happen after\n> > writing the commit WAL even today without this 2PC feature.\n>\n> Why does the user have to get an error? Once the local transaction has been prepared, which means all remote ones also have been prepared, the whole transaction is determined to commit. 
So, the user doesn't have to receive an error as long as the local node is alive.\n\nI think we should neither ignore the error thrown by FDW code nor\nlower the error level (e.g., ERROR to WARNING).\n\n>\n> > And for the\n> > latter point, that's true but I think those cases are\n> > should-not-happen cases (i.g., rare cases) whereas the likelihood of\n> > an error during committing prepared transactions is not low (e.g., by\n> > network connectivity problem). I think we need to assume that that is\n> > not a rare case.\n>\n> How do non-2PC and 2PC cases differ in the rarity of the error?\n\nI think the main difference would be that in 2PC case there will be\nnetwork communications possibly with multiple servers after the local\ncommit.\n\n>\n>\n> > The second problem is whether we can cancel committing foreign\n> > transactions by pg_cancel_backend() (or pressing Ctl-c). If the\n> > backend process commits prepared foreign transactions, it's FDW\n> > developers' responsibility to write code that is interruptible. I’m\n> > not sure it’s feasible for drivers for other databases.\n>\n> That's true not only for prepare and commit but also for other queries. Why do we have to treat prepare and commit specially?\n\nGood point. This would not be a blocker for ideas 1 and 2 but is a\nside benefit of idea 3.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 4 Jun 2021 17:43:46 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "From: Masahiko Sawada <sawada.mshk@gmail.com>\r\n> On Fri, Jun 4, 2021 at 5:04 PM tsunakawa.takay@fujitsu.com\r\n> <tsunakawa.takay@fujitsu.com> wrote:\r\n> > Why does the user have to get an error? Once the local transaction has been\r\n> prepared, which means all remote ones also have been prepared, the whole\r\n> transaction is determined to commit. So, the user doesn't have to receive an\r\n> error as long as the local node is alive.\r\n> \r\n> I think we should neither ignore the error thrown by FDW code nor\r\n> lower the error level (e.g., ERROR to WARNING).\r\n\r\nWhy? (Forgive me for asking relentlessly... by imagining me as a cute 7-year-old boy/girl asking \"Why Dad?\")\r\n\r\n\r\n> > How do non-2PC and 2PC cases differ in the rarity of the error?\r\n> \r\n> I think the main difference would be that in 2PC case there will be\r\n> network communications possibly with multiple servers after the local\r\n> commit.\r\n\r\nThen, it's the same failure mode. That is, the same failure could occur for both cases. That doesn't require us to differentiate between them. Let's ignore this point from now on.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Fri, 4 Jun 2021 08:59:29 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, Jun 4, 2021 at 5:59 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Masahiko Sawada <sawada.mshk@gmail.com>\n> > On Fri, Jun 4, 2021 at 5:04 PM tsunakawa.takay@fujitsu.com\n> > <tsunakawa.takay@fujitsu.com> wrote:\n> > > Why does the user have to get an error? Once the local transaction has been\n> > prepared, which means all remote ones also have been prepared, the whole\n> > transaction is determined to commit. So, the user doesn't have to receive an\n> > error as long as the local node is alive.\n> >\n> > I think we should neither ignore the error thrown by FDW code nor\n> > lower the error level (e.g., ERROR to WARNING).\n>\n> Why? (Forgive me for asking relentlessly... by imagining me as a cute 7-year-old boy/girl asking \"Why Dad?\")\n\nI think we should not reinterpret the severity of the error and lower\nit. Especially, in this case, any kind of errors can be thrown. It\ncould be such a serious error that FDW developer wants to report to\nthe client. Do we lower even PANIC to a lower severity such as\nWARNING? That's definitely a bad idea. If we don’t lower PANIC whereas\nlowering ERROR (and FATAL) to WARNING, why do we regard only them as\nnon-error?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 4 Jun 2021 20:08:38 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, Jun 4, 2021 at 5:16 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Jun 4, 2021 at 3:58 PM ikedamsh@oss.nttdata.com\n> <ikedamsh@oss.nttdata.com> wrote:\n> >\n> >\n> >\n> > 2021/06/04 12:28、Masahiko Sawada <sawada.mshk@gmail.com>のメール:\n> >\n> >\n> > Thank you for pointing it out. This idea has been proposed several\n> > times and there were discussions. I'd like to summarize the proposed\n> > ideas and those pros and cons before replying to your other comments.\n> >\n> > There are 3 ideas. After backend both prepares all foreign transaction\n> > and commit the local transaction,\n> >\n> > 1. the backend continues attempting to commit all prepared foreign\n> > transactions until all of them are committed.\n> > 2. the backend attempts to commit all prepared foreign transactions\n> > once. If an error happens, leave them for the resolver.\n> > 3. the backend asks the resolver that launched per foreign server to\n> > commit the prepared foreign transactions (and backend waits or doesn't\n> > wait for the commit completion depending on the setting).\n> >\n> > With ideas 1 and 2, since the backend itself commits all foreign\n> > transactions the resolver process cannot be a bottleneck, and probably\n> > the code can get more simple as backends don't need to communicate\n> > with resolver processes.\n> >\n> > However, those have two problems we need to deal with:\n> >\n> >\n> > Thanks for sharing the summarize. I understood there are problems related to\n> > FDW implementation.\n> >\n> > First, users could get an error if an error happens during the backend\n> > committing prepared foreign transaction but the local transaction is\n> > already committed and some foreign transactions could also be\n> > committed, confusing users. There were two opinions to this problem:\n> > FDW developers should be responsible for writing FDW code such that\n> > any error doesn't happen during committing foreign transactions, and\n> > users can accept that confusion since an error could happen after\n> > writing the commit WAL even today without this 2PC feature. For the\n> > former point, I'm not sure it's always doable since even palloc()\n> > could raise an error and it seems hard to require all FDW developers\n> > to understand all possible paths of raising an error. And for the\n> > latter point, that's true but I think those cases are\n> > should-not-happen cases (i.g., rare cases) whereas the likelihood of\n> > an error during committing prepared transactions is not low (e.g., by\n> > network connectivity problem). I think we need to assume that that is\n> > not a rare case.\n> >\n> >\n> > Hmm… Sorry, I don’t have any good ideas now.\n> >\n> > If anything, I’m on second side which users accept the confusion though\n> > let users know a error happens before local commit is done or not is necessary\n> > because if the former case, users will execute the same query again.\n>\n> Yeah, users will need to remember the XID of the last executed\n> transaction and check if it has been committed by pg_xact_status().\n\nAs the second idea, can we send something like a hint along with the\nerror (or send a new type of error) that indicates the error happened\nafter the transaction commit so that the client can decide whether or\nnot to ignore the error? That way, we can deal with the confusion caused\nby an error raised after the local commit by the existing post-commit\ncleanup routines (and post-commit xact callbacks) as well as by FDW’s\ncommit prepared routine.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 4 Jun 2021 21:38:00 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "\n\n> 2021/06/04 17:16、Masahiko Sawada <sawada.mshk@gmail.com>のメール:\n> \n> On Fri, Jun 4, 2021 at 3:58 PM ikedamsh@oss.nttdata.com\n> <ikedamsh@oss.nttdata.com> wrote:\n>> \n>> \n>> \n>> 2021/06/04 12:28、Masahiko Sawada <sawada.mshk@gmail.com>のメール:\n>> \n>> \n>> Thank you for pointing it out. This idea has been proposed several\n>> times and there were discussions. I'd like to summarize the proposed\n>> ideas and those pros and cons before replying to your other comments.\n>> \n>> There are 3 ideas. After backend both prepares all foreign transaction\n>> and commit the local transaction,\n>> \n>> 1. the backend continues attempting to commit all prepared foreign\n>> transactions until all of them are committed.\n>> 2. the backend attempts to commit all prepared foreign transactions\n>> once. If an error happens, leave them for the resolver.\n>> 3. the backend asks the resolver that launched per foreign server to\n>> commit the prepared foreign transactions (and backend waits or doesn't\n>> wait for the commit completion depending on the setting).\n>> \n>> With ideas 1 and 2, since the backend itself commits all foreign\n>> transactions the resolver process cannot be a bottleneck, and probably\n>> the code can get more simple as backends don't need to communicate\n>> with resolver processes.\n>> \n>> However, those have two problems we need to deal with:\n>> \n>> \n>> Thanks for sharing the summarize. I understood there are problems related to\n>> FDW implementation.\n>> \n>> First, users could get an error if an error happens during the backend\n>> committing prepared foreign transaction but the local transaction is\n>> already committed and some foreign transactions could also be\n>> committed, confusing users. There were two opinions to this problem:\n>> FDW developers should be responsible for writing FDW code such that\n>> any error doesn't happen during committing foreign transactions, and\n>> users can accept that confusion since an error could happen after\n>> writing the commit WAL even today without this 2PC feature. For the\n>> former point, I'm not sure it's always doable since even palloc()\n>> could raise an error and it seems hard to require all FDW developers\n>> to understand all possible paths of raising an error. And for the\n>> latter point, that's true but I think those cases are\n>> should-not-happen cases (i.g., rare cases) whereas the likelihood of\n>> an error during committing prepared transactions is not low (e.g., by\n>> network connectivity problem). I think we need to assume that that is\n>> not a rare case.\n>> \n>> \n>> Hmm… Sorry, I don’t have any good ideas now.\n>> \n>> If anything, I’m on second side which users accept the confusion though\n>> let users know a error happens before local commit is done or not is necessary\n>> because if the former case, users will execute the same query again.\n> \n> Yeah, users will need to remember the XID of the last executed\n> transaction and check if it has been committed by pg_xact_status().\n> \n>> \n>> \n>> The second problem is whether we can cancel committing foreign\n>> transactions by pg_cancel_backend() (or pressing Ctl-c). If the\n>> backend process commits prepared foreign transactions, it's FDW\n>> developers' responsibility to write code that is interruptible. I’m\n>> not sure it’s feasible for drivers for other databases.\n>> \n>> \n>> Sorry, my understanding is not clear.\n>> \n>> After all prepares are done, the foreign transactions will be committed.\n>> So, does this mean that FDW must leave the unresolved transaction to the transaction\n>> resolver and show some messages like “Since the transaction is already committed,\n>> the transaction will be resolved in background\" ?\n> \n> I think this would happen after the backend cancels COMMIT PREPARED.\n> To be able to cancel an in-progress query the backend needs to accept\n> the interruption and send the cancel request. postgres_fdw can do that\n> since libpq supports sending a query and waiting for the result but\n> I’m not sure about other drivers.\n\nThanks, I understood that handling this issue is not in the scope of the 2PC feature, \nas Tsunakawa-san and you said.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n\n",
"msg_date": "Mon, 7 Jun 2021 09:35:59 +0900",
"msg_from": "\"ikedamsh@oss.nttdata.com\" <ikedamsh@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "\n\n> 2021/06/04 21:38、Masahiko Sawada <sawada.mshk@gmail.com>のメール:\n> \n> On Fri, Jun 4, 2021 at 5:16 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>> \n>> On Fri, Jun 4, 2021 at 3:58 PM ikedamsh@oss.nttdata.com\n>> <ikedamsh@oss.nttdata.com> wrote:\n>>> \n>>> \n>>> \n>>> 2021/06/04 12:28、Masahiko Sawada <sawada.mshk@gmail.com>のメール:\n>>> \n>>> \n>>> Thank you for pointing it out. This idea has been proposed several\n>>> times and there were discussions. I'd like to summarize the proposed\n>>> ideas and those pros and cons before replying to your other comments.\n>>> \n>>> There are 3 ideas. After backend both prepares all foreign transaction\n>>> and commit the local transaction,\n>>> \n>>> 1. the backend continues attempting to commit all prepared foreign\n>>> transactions until all of them are committed.\n>>> 2. the backend attempts to commit all prepared foreign transactions\n>>> once. If an error happens, leave them for the resolver.\n>>> 3. the backend asks the resolver that launched per foreign server to\n>>> commit the prepared foreign transactions (and backend waits or doesn't\n>>> wait for the commit completion depending on the setting).\n>>> \n>>> With ideas 1 and 2, since the backend itself commits all foreign\n>>> transactions the resolver process cannot be a bottleneck, and probably\n>>> the code can get more simple as backends don't need to communicate\n>>> with resolver processes.\n>>> \n>>> However, those have two problems we need to deal with:\n>>> \n>>> \n>>> Thanks for sharing the summarize. I understood there are problems related to\n>>> FDW implementation.\n>>> \n>>> First, users could get an error if an error happens during the backend\n>>> committing prepared foreign transaction but the local transaction is\n>>> already committed and some foreign transactions could also be\n>>> committed, confusing users. There were two opinions to this problem:\n>>> FDW developers should be responsible for writing FDW code such that\n>>> any error doesn't happen during committing foreign transactions, and\n>>> users can accept that confusion since an error could happen after\n>>> writing the commit WAL even today without this 2PC feature. For the\n>>> former point, I'm not sure it's always doable since even palloc()\n>>> could raise an error and it seems hard to require all FDW developers\n>>> to understand all possible paths of raising an error. And for the\n>>> latter point, that's true but I think those cases are\n>>> should-not-happen cases (i.g., rare cases) whereas the likelihood of\n>>> an error during committing prepared transactions is not low (e.g., by\n>>> network connectivity problem). I think we need to assume that that is\n>>> not a rare case.\n>>> \n>>> \n>>> Hmm… Sorry, I don’t have any good ideas now.\n>>> \n>>> If anything, I’m on second side which users accept the confusion though\n>>> let users know a error happens before local commit is done or not is necessary\n>>> because if the former case, users will execute the same query again.\n>> \n>> Yeah, users will need to remember the XID of the last executed\n>> transaction and check if it has been committed by pg_xact_status().\n> \n> As the second idea, can we send something like a hint along with the\n> error (or send a new type of error) that indicates the error happened\n> after the transaction commit so that the client can decide whether or\n> not to ignore the error? That way, we can deal with the confusion led\n> by an error raised after the local commit by the existing post-commit\n> cleanup routines (and post-commit xact callbacks) as well as by FDW’s\n> commit prepared routine.\n\n\nI think your second idea is better because it’s easier for users to know what \nerror happened and there is nothing users need to do. Since the focus of \"hint” \nis how to fix the problem, is it appropriate to use \"context”? \n\nFWIW, I took a quick look at elog.c and I found there is “error_context_stack”. \nSo, why don’t you add a context which shows something like \"the transaction fate is \ndecided to COMMIT (or ROLLBACK). So, even if an error happens, the transaction \nwill be resolved in background” after the local commit?\n\n\nRegards,\n\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n\n",
"msg_date": "Mon, 7 Jun 2021 09:57:45 +0900",
"msg_from": "\"ikedamsh@oss.nttdata.com\" <ikedamsh@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "From: Masahiko Sawada <sawada.mshk@gmail.com>\r\n> I think we should not reinterpret the severity of the error and lower\r\n> it. Especially, in this case, any kind of errors can be thrown. It\r\n> could be such a serious error that FDW developer wants to report to\r\n> the client. Do we lower even PANIC to a lower severity such as\r\n> WARNING? That's definitely a bad idea. If we don’t lower PANIC whereas\r\n> lowering ERROR (and FATAL) to WARNING, why do we regard only them as\r\n> non-error?\r\n\r\nWhy does the client have to know the error on a remote server, whereas the global transaction itself is destined to commit?\r\n\r\nFYI, the tx_commit() in the X/Open TX interface and the UserTransaction.commit() in JTA don't return such an error, IIRC. Do TX_FAIL and SystemException serve such a purpose? I don't feel like that.\r\n\r\n\r\n[Tuxedo manual (Japanese)]\r\nhttps://docs.oracle.com/cd/F25597_01/document/products/tuxedo/tux80j/atmi/rf3c91.htm\r\n\r\n\r\n[JTA]\r\npublic interface javax.transaction.UserTransaction \r\npublic void commit()\r\n throws RollbackException, HeuristicMixedException, \r\nHeuristicRollbackException, SecurityException, \r\nIllegalStateException, SystemException \r\n\r\nThrows: RollbackException \r\nThrown to indicate that the transaction has been rolled back rather than committed. \r\n\r\nThrows: HeuristicMixedException \r\nThrown to indicate that a heuristic decision was made and that some relevant updates have been \r\ncommitted while others have been rolled back. \r\n\r\nThrows: HeuristicRollbackException \r\nThrown to indicate that a heuristic decision was made and that all relevant updates have been rolled \r\nback. \r\n\r\nThrows: SecurityException \r\nThrown to indicate that the thread is not allowed to commit the transaction. \r\n\r\nThrows: IllegalStateException \r\nThrown if the current thread is not associated with a transaction. \r\n\r\nThrows: SystemException \r\nThrown if the transaction manager encounters an unexpected error condition. \r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Tue, 8 Jun 2021 00:47:08 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Tue, Jun 8, 2021 at 9:47 AM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Masahiko Sawada <sawada.mshk@gmail.com>\n> > I think we should not reinterpret the severity of the error and lower\n> > it. Especially, in this case, any kind of errors can be thrown. It\n> > could be such a serious error that FDW developer wants to report to\n> > the client. Do we lower even PANIC to a lower severity such as\n> > WARNING? That's definitely a bad idea. If we don’t lower PANIC whereas\n> > lowering ERROR (and FATAL) to WARNING, why do we regard only them as\n> > non-error?\n>\n> Why does the client have to know the error on a remote server, whereas the global transaction itself is destined to commit?\n\nIt's not necessarily on a remote server. It could be a problem with\nthe local server.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 8 Jun 2021 16:32:14 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "(I have caught up here. Sorry in advance for possible pointless\r\ndiscussion by me..)\r\n\r\nAt Tue, 8 Jun 2021 00:47:08 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in \r\n> From: Masahiko Sawada <sawada.mshk@gmail.com>\r\n> > I think we should not reinterpret the severity of the error and lower\r\n> > it. Especially, in this case, any kind of errors can be thrown. It\r\n> > could be such a serious error that FDW developer wants to report to\r\n> > the client. Do we lower even PANIC to a lower severity such as\r\n> > WARNING? That's definitely a bad idea. If we don’t lower PANIC whereas\r\n> > lowering ERROR (and FATAL) to WARNING, why do we regard only them as\r\n> > non-error?\r\n> \r\n> Why does the client have to know the error on a remote server, whereas the global transaction itself is destined to commit?\r\n\r\nI think the discussion is based on the behavior that any process that is\r\nresponsible for finishing the 2pc-commit continues retrying remote\r\ncommits until all of the remote-commits succeed.\r\n\r\nMaybe in most cases the errors during remote-prepared-commit could be\r\nretry-able but as Sawada-san says I'm also not sure it's always the\r\ncase. On the other hand, it could be said that we have no other way\r\nthan retrying the remote-commits if we want to get over, say, instant\r\nnetwork failures automatically. It is somewhat similar to\r\nWAL-restoration that continues complaining for recovery_commands\r\nfailure without exiting.\r\n\r\n> FYI, the tx_commit() in the X/Open TX interface and the UserTransaction.commit() in JTA don't return such an error, IIRC. Do TX_FAIL and SystemException serve such a purpose? I don't feel like that.\r\n\r\nI'm not sure about how JTA works in detail, but doesn't\r\nUserTransaction.commit() return HeuristicMixedException when some of\r\nthe relevant updates have been committed but others not? Isn't it the same\r\nstate with the case where some of the remote servers failed on\r\nremote-commit while others succeeded? (I guess that\r\nUserTransaction.commit() would throw RollbackException if\r\nremote-prepare has failed for any of the remotes.)\r\n\r\n\r\n> [Tuxedo manual (Japanese)]\r\n> https://docs.oracle.com/cd/F25597_01/document/products/tuxedo/tux80j/atmi/rf3c91.htm\r\n> \r\n> \r\n> [JTA]\r\n> public interface javax.transaction.UserTransaction \r\n> public void commit()\r\n> throws RollbackException, HeuristicMixedException, \r\n> HeuristicRollbackException, SecurityException, \r\n> IllegalStateException, SystemException \r\n> \r\n> Throws: RollbackException \r\n> Thrown to indicate that the transaction has been rolled back rather than committed. \r\n> \r\n> Throws: HeuristicMixedException \r\n> Thrown to indicate that a heuristic decision was made and that some relevant updates have been \r\n> committed while others have been rolled back. \r\n> \r\n> Throws: HeuristicRollbackException \r\n> Thrown to indicate that a heuristic decision was made and that all relevant updates have been rolled \r\n> back. \r\n> \r\n> Throws: SecurityException \r\n> Thrown to indicate that the thread is not allowed to commit the transaction. \r\n> \r\n> Throws: IllegalStateException \r\n> Thrown if the current thread is not associated with a transaction. \r\n> \r\n> Throws: SystemException \r\n> Thrown if the transaction manager encounters an unexpected error condition. \r\n> \r\n> \r\n> Regards\r\n> Takayuki Tsunakawa\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Tue, 08 Jun 2021 17:09:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take\n 2"
},
{
"msg_contents": "At Tue, 8 Jun 2021 16:32:14 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in \r\n> On Tue, Jun 8, 2021 at 9:47 AM tsunakawa.takay@fujitsu.com\r\n> <tsunakawa.takay@fujitsu.com> wrote:\r\n> >\r\n> > From: Masahiko Sawada <sawada.mshk@gmail.com>\r\n> > > I think we should not reinterpret the severity of the error and lower\r\n> > > it. Especially, in this case, any kind of errors can be thrown. It\r\n> > > could be such a serious error that FDW developer wants to report to\r\n> > > the client. Do we lower even PANIC to a lower severity such as\r\n> > > WARNING? That's definitely a bad idea. If we don’t lower PANIC whereas\r\n> > > lowering ERROR (and FATAL) to WARNING, why do we regard only them as\r\n> > > non-error?\r\n> >\r\n> > Why does the client have to know the error on a remote server, whereas the global transaction itself is destined to commit?\r\n> \r\n> It's not necessarily on a remote server. It could be a problem with\r\n> the local server.\r\n\r\nIsn't it a discussion about the errors from postgres_fdw?\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Tue, 08 Jun 2021 17:11:51 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take\n 2"
},
{
"msg_contents": "From: Masahiko Sawada <sawada.mshk@gmail.com>\r\n> On Tue, Jun 8, 2021 at 9:47 AM tsunakawa.takay@fujitsu.com\r\n> <tsunakawa.takay@fujitsu.com> wrote:\r\n> > Why does the client have to know the error on a remote server, whereas the\r\n> global transaction itself is destined to commit?\r\n> \r\n> It's not necessarily on a remote server. It could be a problem with\r\n> the local server.\r\n\r\nThen, in what kind of scenario are we talking about the difficulty, and how is it difficult to handle, when we adopt either method 1 or 2? (I'd just like to have the same clear picture.) For example,\r\n\r\n1. All FDWs prepared successfully.\r\n2. The local transaction prepared successfully, too.\r\n3. Some FDWs committed successfully.\r\n4. One FDW failed to send the commit request because the remote server went down.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Tue, 8 Jun 2021 08:28:32 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\r\n> I think the discussion is based the behavior that any process that is\r\n> responsible for finishing the 2pc-commit continue retrying remote\r\n> commits until all of the remote-commits succeed.\r\n\r\nThank you for coming back. We're talking about the first attempt to prepare and commit in each transaction, not the retry case.\r\n\r\n\r\n> > Throws: HeuristicMixedException\r\n> > Thrown to indicate that a heuristic decision was made and that some\r\n> relevant updates have been\r\n> > committed while others have been rolled back.\r\n\r\n> I'm not sure about how JTA works in detail, but doesn't\r\n> UserTransaction.commit() return HeuristicMixedExcpetion when some of\r\n> relevant updates have been committed but other not? Isn't it the same\r\n> state with the case where some of the remote servers failed on\r\n> remote-commit while others are succeeded?\r\n\r\nNo. Taking the description literally and considering the relevant XA specification, it's not about the remote commit failure. The remote server is not allowed to fail the commit once it has reported successful prepare, which is the contract of 2PC. HeuristicMixedException is about the manual resolution, typically by the DBA, using the DBMS-specific tool or the standard commit()/rollback() API.\r\n\r\n\r\n> (I guess that\r\n> UserTransaction.commit() would throw RollbackException if\r\n> remote-prepare has been failed for any of the remotes.)\r\n\r\nCorrect.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Tue, 8 Jun 2021 08:45:24 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Tue, Jun 8, 2021 at 5:28 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Masahiko Sawada <sawada.mshk@gmail.com>\n> > On Tue, Jun 8, 2021 at 9:47 AM tsunakawa.takay@fujitsu.com\n> > <tsunakawa.takay@fujitsu.com> wrote:\n> > > Why does the client have to know the error on a remote server, whereas the\n> > global transaction itself is destined to commit?\n> >\n> > It's not necessarily on a remote server. It could be a problem with\n> > the local server.\n>\n> Then, in what kind of scenario are we talking about the difficulty, and how is it difficult to handle, when we adopt either the method 1 or 2? (I'd just like to have the same clear picture.)\n\nIMO, even though FDW's commit/rollback transaction code could be\nsimple in some cases, I think we need to think that any kind of errors\n(or even FATAL or PANIC) could be thrown from the FDW code. It could\nbe an error due to a temporary network problem, remote server down,\ndriver’s unexpected error, or out of memory etc. Errors that happened\nafter the local transaction commit don't affect the global\ntransaction decision, as you mentioned. But the process or system\ncould be in a bad state. Also, users might expect the process to exit\non error by setting exit_on_error = on. Your idea sounds like we\nhave to ignore any errors happening after the local commit if they\ndon’t affect the transaction outcome. It’s too scary to me and I think\nthat it's a bad idea to blindly ignore all possible errors under such\nconditions. That could make things worse and will likely be a\nfoot-gun. It would be good if we could prove that it’s safe to ignore\nthose errors, but I'm not sure how we can, at least for me.\n\nThis situation is true even today; an error could happen after\ncommitting the transaction. But I personally don’t want to add\ncode that increases the likelihood.\n\nJust to be clear, with your idea, will we ignore only ERROR, or also\nFATAL and PANIC? And if an error happens during committing one of the\nprepared transactions on the foreign server, will we proceed with\ncommitting other transactions or return OK to the client?\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 9 Jun 2021 13:26:49 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "From: Masahiko Sawada <sawada.mshk@gmail.com>\r\n> On Tue, Jun 8, 2021 at 5:28 PM tsunakawa.takay@fujitsu.com\r\n> <tsunakawa.takay@fujitsu.com> wrote:\r\n> > Then, in what kind of scenario are we talking about the difficulty, and how is\r\n> it difficult to handle, when we adopt either the method 1 or 2? (I'd just like to\r\n> have the same clear picture.)\r\n> \r\n> IMO, even though FDW's commit/rollback transaction code could be\r\n> simple in some cases, I think we need to think that any kind of errors\r\n> (or even FATAL or PANIC) could be thrown from the FDW code. It could\r\n> be an error due to a temporary network problem, remote server down,\r\n> driver’s unexpected error, or out of memory etc. Errors that happened\r\n> after the local transaction commit doesn't affect the global\r\n> transaction decision, as you mentioned. But the proccess or system\r\n> could be in a bad state. Also, users might expect the process to exit\r\n> on error by setting exit_on_error = on. Your idea sounds like that we\r\n> have to ignore any errors happening after the local commit if they\r\n> don’t affect the transaction outcome. It’s too scary to me and I think\r\n> that it's a bad idea to blindly ignore all possible errors under such\r\n> conditions. That could make the thing worse and will likely be\r\n> foot-gun. It would be good if we can prove that it’s safe to ignore\r\n> those errors but not sure how we can at least for me.\r\n> \r\n> This situation is true even today; an error could happen after\r\n> committing the transaction. But I personally don’t want to add the\r\n> code that increases the likelihood.\r\n\r\nI'm not talking about the code simplicity here (actually, I haven't reviewed the code around prepare and commit in the patch yet...) Also, I don't understand well what you're trying to insist and what realistic situations you have in mind by citing exit_on_error, FATAL, PANIC and so on. I just asked (in a different part) why the client has to know the error.\r\n\r\nJust to be clear, I'm not saying that we should hide the error completely behind the scenes. For example, you can allow the FDW to emit a WARNING if the DBMS-specific client driver returns an error when committing. Further, if you want to allow the FDW to throw an ERROR when committing, the transaction manager in core can catch it by PG_TRY(), so that it can report back successful commit of the global transaction to the client while it leaves the handling of failed commit of the FDW to the resolver. (I don't think we like to use PG_TRY() during transaction commit for performance reasons, though.)\r\n\r\nLet's give it a hundred steps and let's say we want to report the error of the committing FDW to the client. If that's the case, we can use SQLSTATE 02xxx (Warning) and attach the error message.\r\n\r\n\r\n> Just to be clear, with your idea, we will ignore only ERROR or also\r\n> FATAL and PANIC? And if an error happens during committing one of the\r\n> prepared transactions on the foreign server, will we proceed with\r\n> committing other transactions or return OK to the client?\r\n\r\nNeither FATAL nor PANIC can be ignored. When FATAL, which means the termination of a particular session, the committing of the remote transaction should be taken over by the resolver. Not to mention PANIC; we can't do anything. Otherwise, we proceed with committing other FDWs, hand off the task of committing the failed FDW to the resolver, and report success to the client. If you're not convinced, I'd like to ask you to investigate the code of some Java EE app server, say GlassFish, and share with us how it handles an error during commit.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Wed, 9 Jun 2021 07:10:48 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Wed, Jun 9, 2021 at 4:10 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Masahiko Sawada <sawada.mshk@gmail.com>\n> > On Tue, Jun 8, 2021 at 5:28 PM tsunakawa.takay@fujitsu.com\n> > <tsunakawa.takay@fujitsu.com> wrote:\n> > > Then, in what kind of scenario are we talking about the difficulty, and how is\n> > it difficult to handle, when we adopt either the method 1 or 2? (I'd just like to\n> > have the same clear picture.)\n> >\n> > IMO, even though FDW's commit/rollback transaction code could be\n> > simple in some cases, I think we need to think that any kind of errors\n> > (or even FATAL or PANIC) could be thrown from the FDW code. It could\n> > be an error due to a temporary network problem, remote server down,\n> > driver’s unexpected error, or out of memory etc. Errors that happened\n> > after the local transaction commit doesn't affect the global\n> > transaction decision, as you mentioned. But the proccess or system\n> > could be in a bad state. Also, users might expect the process to exit\n> > on error by setting exit_on_error = on. Your idea sounds like that we\n> > have to ignore any errors happening after the local commit if they\n> > don’t affect the transaction outcome. It’s too scary to me and I think\n> > that it's a bad idea to blindly ignore all possible errors under such\n> > conditions. That could make the thing worse and will likely be\n> > foot-gun. It would be good if we can prove that it’s safe to ignore\n> > those errors but not sure how we can at least for me.\n> >\n> > This situation is true even today; an error could happen after\n> > committing the transaction. But I personally don’t want to add the\n> > code that increases the likelihood.\n>\n> I'm not talking about the code simplicity here (actually, I haven't reviewed the code around prepare and commit in the patch yet...) 
Also, I don't understand well what you're trying to insist and what realistic situations you have in mind by citing exit_on_error, FATAL, PANIC and so on. I just asked (in a different part) why the client has to know the error.\n>\n> Just to be clear, I'm not saying that we should hide the error completely behind the scenes. For example, you can allow the FDW to emit a WARNING if the DBMS-specific client driver returns an error when committing. Further, if you want to allow the FDW to throw an ERROR when committing, the transaction manager in core can catch it by PG_TRY(), so that it can report back successfull commit of the global transaction to the client while it leaves the handling of failed commit of the FDW to the resolver. (I don't think we like to use PG_TRY() during transaction commit for performance reasons, though.)\n>\n> Let's give it a hundred steps and let's say we want to report the error of the committing FDW to the client. If that's the case, we can use SQLSTATE 02xxx (Warning) and attach the error message.\n>\n\nMaybe it's better to start a new thread to discuss this topic. If your\nidea is good, we can lower all errors that happen after writing the\ncommit record to warnings, reducing the cases where the client gets\nconfused by receiving an error after the commit.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 9 Jun 2021 16:25:17 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "From: Masahiko Sawada <sawada.mshk@gmail.com>\r\n> Maybe it's better to start a new thread to discuss this topic. If your\r\n> idea is good, we can lower all error that happened after writing the\r\n> commit record to warning, reducing the cases where the client gets\r\n> confusion by receiving an error after the commit.\r\n\r\nNo. It's an important part because it determines the 2PC behavior and performance. This discussion started from the concern about performance before Ikeda-san reported pathological results. Don't rush forward, hoping someone will commit the current patch. I'm afraid you just don't want to change your design and code. Let's face the real issue.\r\n\r\nAs I said before, and as Ikeda-san's performance benchmark results show, I have to say the design isn't done sufficiently. I talked with Fujii-san the other day about this patch. The patch is already huge and it's difficult to decode how the patch works, e.g., what kind of new WALs it emits, how many disk writes it adds, how the error is handled, whether/how it's different from the textbook or other existing designs, etc. What happened to my request to add such a design description to the following page, so that reviewers can consider the design before spending much time on looking at the code? What's the situation of the new FDW API that should naturally accommodate other FDW implementations?\r\n\r\nAtomic Commit of Distributed Transactions\r\nhttps://wiki.postgresql.org/wiki/Atomic_Commit_of_Distributed_Transactions\r\n\r\nDesign should come first. I don't think it's a sincere attitude to require reviewers to spend a long time reading the design from huge code.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n",
"msg_date": "Wed, 9 Jun 2021 08:07:40 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "At Tue, 8 Jun 2021 08:45:24 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in \n> From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > I think the discussion is based the behavior that any process that is\n> > responsible for finishing the 2pc-commit continue retrying remote\n> > commits until all of the remote-commits succeed.\n> \n> Thank you for coming back. We're talking about the first attempt to prepare and commit in each transaction, not the retry case.\n\nIf we accept that each elementary commit (via an FDW connection) can\nfail, there's no way the parent, the root 2pc-commit, can succeed. How\ncan we ignore the fdw-error in that case?\n\n> > > Throws: HeuristicMixedException\n> > > Thrown to indicate that a heuristic decision was made and that some\n> > relevant updates have been\n> > > committed while others have been rolled back.\n> \n> > I'm not sure about how JTA works in detail, but doesn't\n> > UserTransaction.commit() return HeuristicMixedExcpetion when some of\n> > relevant updates have been committed but other not? Isn't it the same\n> > state with the case where some of the remote servers failed on\n> > remote-commit while others are succeeded?\n> \n> No. Taking the description literally and considering the relevant XA specification, it's not about the remote commit failure. The remote server is not allowed to fail the commit once it has reported successful prepare, which is the contract of 2PC. HeuristicMixedException is about the manual resolution, typically by the DBA, using the DBMS-specific tool or the standard commit()/rollback() API.\n\nMmm. The above seems to say that 2pc-commit does not interact\nwith remotes. The interface contract does not cover everything that\nhappens in the real world. If remote-commit fails, that is just an\nissue outside of the 2pc world. 
In reality remote-commit may fail for\nall sorts of reasons.\n\nhttps://www.ibm.com/docs/ja/db2-for-zos/11?topic=support-example-distributed-transaction-that-uses-jta-methods\n\n> } catch (javax.transaction.xa.XAException xae)\n> { // Distributed transaction failed, so roll it back.\n> // Report XAException on prepare/commit.\n\nThis suggests that both XAResource.prepare() and commit() can throw an\nexception.\n\n> > (I guess that\n> > UserTransaction.commit() would throw RollbackException if\n> > remote-prepare has been failed for any of the remotes.)\n> \n> Correct.\n\nSo UserTransaction.commit() does not throw the same exception if\nremote-commit fails. Isn't the HeuristicMixedException the exception\nthrown in that case?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 10 Jun 2021 11:04:20 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take\n 2"
},
{
"msg_contents": "From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> If we accept each elementary-commit (via FDW connection) to fail, the\n> parent(?) there's no way the root 2pc-commit can succeed. How can we\n> ignore the fdw-error in that case?\n\nNo, we don't ignore the error during FDW commit. As mentioned at the end of this mail, the question is how the FDW reports the error to the caller (transaction manager in Postgres core), and how we should handle it.\n\nAs below, Glassfish catches the resource manager's error during commit, retries the commit if the error is transient or communication failure, and hands off the processing of failed commit to the recovery manager. (I used all of my energy today; I'd be grateful if someone could figure out whether Glassfish reports the error to the application.)\n\n\n[XATerminatorImpl.java]\n public void commit(Xid xid, boolean onePhase) throws XAException {\n...\n } else {\n coord.commit();\n }\n\n\n[TopCoordinator.java]\n // Commit all participants. If a fatal error occurs during\n // this method, then the process must be ended with a fatal error.\n...\n try {\n participants.distributeCommit();\n } catch (Throwable exc) {\n\n\n[RegisteredResources.java]\n void distributeCommit() throws HeuristicMixed, HeuristicHazard, NotPrepared {\n...\n // Browse through the participants, committing them. The following is\n // intended to be done asynchronously as a group of operations.\n...\n // Tell the resource to commit.\n // Catch any exceptions here; keep going until\n // no exception is left.\n...\n // If the exception is neither TRANSIENT or\n // COMM_FAILURE, it is unexpected, so display a\n // message and give up with this Resource.\n...\n // For TRANSIENT or COMM_FAILURE, wait\n // for a while, then retry the commit.\n...\n // If the retry limit has been exceeded,\n // end the process with a fatal error.\n...\n if (!transactionCompleted) {\n if (coord != null)\n RecoveryManager.addToIncompleTx(coord, true);\n\n\n> > No. 
Taking the description literally and considering the relevant XA\n> specification, it's not about the remote commit failure. The remote server is\n> not allowed to fail the commit once it has reported successful prepare, which is\n> the contract of 2PC. HeuristicMixedException is about the manual resolution,\n> typically by the DBA, using the DBMS-specific tool or the standard\n> commit()/rollback() API.\n> \n> Mmm. The above seems as if saying that 2pc-comit does not interact\n> with remotes. The interface contract does not cover everything that\n> happens in the real world. If remote-commit fails, that is just an\n> issue outside of the 2pc world. In reality remote-commit may fail for\n> all reasons.\n\nThe following part of XA specification is relevant. We're considering to model the FDW 2PC interface based on XA, because it seems like the only standard interface and thus other FDWS would naturally take advantage of, aren't we? Then, we need to take care of such things as this. The interface design is not easy. So, proper design and its review should come first, before going deeper into the huge code patch.\n\n2.3.3 Heuristic Branch Completion \n--------------------------------------------------\nSome RMs may employ heuristic decision-making: an RM that has prepared to \ncommit a transaction branch may decide to commit or roll back its work independently \nof the TM. It could then unlock shared resources. This may leave them in an \ninconsistent state. When the TM ultimately directs an RM to complete the branch, the \nRM may respond that it has already done so. The RM reports whether it committed \nthe branch, rolled it back, or completed it with mixed results (committed some work \nand rolled back other work). \n\nAn RM that reports heuristic completion to the TM must not discard its knowledge of \nthe transaction branch. The TM calls the RM once more to authorise it to forget the \nbranch. 
This requirement means that the RM must notify the TM of all heuristic \ndecisions, even those that match the decision the TM requested. The referenced \nOSI DTP specifications (model) and (service) define heuristics more precisely. \n--------------------------------------------------\n\n\n> https://www.ibm.com/docs/ja/db2-for-zos/11?topic=support-example-distr\n> ibuted-transaction-that-uses-jta-methods\n> This suggests that both XAResoruce.prepare() and commit() can throw a\n> exception.\n\nYes, XAResource() can throw an exception:\n\nvoid commit(Xid xid, boolean onePhase) throws XAException \n\nThrows: XAException \nAn error has occurred. Possible XAExceptions are XA_HEURHAZ, XA_HEURCOM, \nXA_HEURRB, XA_HEURMIX, XAER_RMERR, XAER_RMFAIL, XAER_NOTA, \nXAER_INVAL, or XAER_PROTO. \n\nThis is equivalent to xa_commit() in the XA specification. xa_commit() can return an error code that have the same names as above.\n\nThe question we're trying to answer here is:\n\n* How such an error should be handled?\nGlassfish (and possibly other Java EE servers) catch the error, continue to commit the rest of participants, and handle the failed resource manager's commit in the background. In Postgres, if we allow FDWs to do ereport(ERROR), how can we do similar things?\n\n* Should we report the error to the client? If yes, should it be reported as a failure of commit, or as an informational message (WARNING) of a successful commit? Why does the client want to know the error, where the global transaction's commit has been promised?\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Thu, 10 Jun 2021 07:08:37 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, Jun 4, 2021 at 4:04 AM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n> Why does the user have to get an error? Once the local transaction has been prepared, which means all remote ones also have been prepared, the whole transaction is determined to commit. So, the user doesn't have to receive an error as long as the local node is alive.\n\nThat is completely unrealistic. As Sawada-san has pointed out\nrepeatedly, there are tons of things that can go wrong even after the\nremote side has prepared the transaction. Preparing a transaction only\npromises that the remote side will let you commit the transaction upon\nrequest. It doesn't guarantee that you'll be able to make the request.\nLike Sawada-san says, network problems, out of memory issues, or many\nother things could stop that from happening. Someone could come along\nin another session and run \"ROLLBACK PREPARED\" on the remote side, and\nnow the \"COMMIT PREPARED\" will never succeed no matter how many times\nyou try it. At least, not unless someone goes and creates a new\nprepared transaction with the same 2PC identifier, but then you won't\nbe committing the correct transaction anyway. Or someone could take\nthe remote server and drop it in a volcano. How do you propose that we\navoid giving the user an error after the remote server has been\ndropped into a volcano, even though the local node is still alive?\n\nAlso, leaving aside theoretical arguments, I think it's not\nrealistically possible for an FDW author to write code to commit a\nprepared transaction that will be safe in the context of running late\nin PrepareTransaction(), after we've already done\nRecordTransactionCommit(). Such code can't avoid throwing errors\nbecause it can't avoid performing operations and allocating memory.\nIt's already been mentioned that, if an ERROR is thrown, it would be\nreported to the user in place of the COMMIT acknowledgement that they\nare expecting. 
Now, it has also been suggested that we could downgrade\nthe ERROR to a WARNING and still report the COMMIT. That doesn't sound\neasy to do, because when the ERROR happens, control is going to jump\nto AbortTransaction(). But even if you could hack it so it works like\nthat, it doesn't really solve the problem. What about all of the other\nservers where the prepared transaction also needs to be committed? In\nthe design of PostgreSQL, in all circumstances, the way you recover\nfrom an error is to abort the transaction. That is what brings the\nsystem back to a clean state. You can't simply ignore the requirement\nto abort the transaction and keep doing more work. It will never be\nreliable, and Tom will instantaneously demand that any code that works\nlike that be reverted -- and for good reason.\n\nI am not sure that it's 100% impossible to find a way to solve this\nproblem without just having the resolver do all the work, but I think\nit's going to be extremely difficult. We tried to figure out some\nvaguely similar things while working on undo, and it really didn't go\nvery well. The later stages of CommitTransaction() and\nAbortTransaction() are places where very few kinds of code are safe to\nexecute, and finding a way to patch around that problem is not simple\neither. If the resolver performance is poor, perhaps we could try to\nfind a way to improve it. I don't know. But I don't think it does any\ngood to say, well, no errors can occur after the remote transaction is\nprepared. That's clearly incorrect.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 10 Jun 2021 12:33:43 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "From: Robert Haas <robertmhaas@gmail.com>\r\n> That is completely unrealistic. As Sawada-san has pointed out\r\n> repeatedly, there are tons of things that can go wrong even after the\r\n> remote side has prepared the transaction. Preparing a transaction only\r\n> promises that the remote side will let you commit the transaction upon\r\n> request. It doesn't guarantee that you'll be able to make the request.\r\n> Like Sawada-san says, network problems, out of memory issues, or many\r\n> other things could stop that from happening. Someone could come along\r\n> in another session and run \"ROLLBACK PREPARED\" on the remote side, and\r\n> now the \"COMMIT PREPARED\" will never succeed no matter how many times\r\n> you try it. At least, not unless someone goes and creates a new\r\n> prepared transaction with the same 2PC identifier, but then you won't\r\n> be committing the correct transaction anyway. Or someone could take\r\n> the remote server and drop it in a volcano. How do you propose that we\r\n> avoid giving the user an error after the remote server has been\r\n> dropped into a volcano, even though the local node is still alive?\r\n\r\nI understand that. As I cited yesterday and possibly before, that's why xa_commit() returns various return codes. So, I have never suggested that FDWs should not report an error and always report success for the commit request. They should be allowed to report an error.\r\n\r\nThe question I have been asking is how. With that said, we should only have two options; one is the return value of the FDW commit routine, and the other is via ereport(ERROR). 
I suggested the possibility of the former, because if the FDW does ereport(ERROR), Postgres core (transaction manager) may have difficulty in handling the rest of the participants.\r\n\r\n\r\n> Also, leaving aside theoretical arguments, I think it's not\r\n> realistically possible for an FDW author to write code to commit a\r\n> prepared transaction that will be safe in the context of running late\r\n> in PrepareTransaction(), after we've already done\r\n> RecordTransactionCommit(). Such code can't avoid throwing errors\r\n> because it can't avoid performing operations and allocating memory.\r\n\r\nI'm not completely sure about this. I thought (and said) that the only thing the FDW does would be to send a commit request through an existing connection. So, I think it's not a severe restriction to require FDWs to do ereport(ERROR) during commits (of the second phase of 2PC.)\r\n\r\n\r\n> It's already been mentioned that, if an ERROR is thrown, it would be\r\n> reported to the user in place of the COMMIT acknowledgement that they\r\n> are expecting. Now, it has also been suggested that we could downgrade\r\n> the ERROR to a WARNING and still report the COMMIT. That doesn't sound\r\n> easy to do, because when the ERROR happens, control is going to jump\r\n> to AbortTransaction(). But even if you could hack it so it works like\r\n> that, it doesn't really solve the problem. What about all of the other\r\n> servers where the prepared transaction also needs to be committed? In\r\n> the design of PostgreSQL, in all circumstances, the way you recover\r\n> from an error is to abort the transaction. That is what brings the\r\n> system back to a clean state. You can't simply ignore the requirement\r\n> to abort the transaction and keep doing more work. It will never be\r\n> reliable, and Tom will instantaneously demand that any code works like\r\n> that be reverted -- and for good reason.\r\n\r\n(I took \"abort\" as the same as \"rollback\" here.) 
Once we've sent commit requests to some participants, we can't abort the transaction. If one FDW returned an error halfway, we need to send commit requests to the rest of participants.\r\n\r\nIt's a design question, as I repeatedly said, whether and how we should report the error of some participants to the client. For instance, how should we report the errors of multiple participants? Concatenate those error messages?\r\n\r\nAnyway, we should design the interface first, giving much thought and respecting the ideas of predecessors (TX/XA, MS DTC, JTA/JTS). Otherwise, we may end up like \"We implemented like this, so the interface is like this and it can only behave like this, although you may find it strange...\" That might be a situation similar to what your comment \"the design of PostgreSQL, in all circumstances, the way you recover from an error is to abort the transaction\" suggests -- Postgres doesn't have statement-level rollback.\r\n\r\n\r\n> I am not sure that it's 100% impossible to find a way to solve this\r\n> problem without just having the resolver do all the work, but I think\r\n> it's going to be extremely difficult. We tried to figure out some\r\n> vaguely similar things while working on undo, and it really didn't go\r\n> very well. The later stages of CommitTransaction() and\r\n> AbortTransaction() are places where very few kinds of code are safe to\r\n> execute, and finding a way to patch around that problem is not simple\r\n> either. If the resolver performance is poor, perhaps we could try to\r\n> find a way to improve it. I don't know. But I don't think it does any\r\n> good to say, well, no errors can occur after the remote transaction is\r\n> prepared. That's clearly incorrect.\r\n\r\nI don't think the resolver-based approach would bring us far enough. It's fundamentally a bottleneck. 
Such a background process should only handle commits whose requests failed to be sent due to server down.\r\n\r\nMy requests are only twofold and haven't changed for long: design the FDW interface that implementors can naturally follow, and design to ensure performance.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Fri, 11 Jun 2021 01:58:26 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Thu, Jun 10, 2021 at 9:58 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n> I understand that. As I cited yesterday and possibly before, that's why xa_commit() returns various return codes. So, I have never suggested that FDWs should not report an error and always report success for the commit request. They should be allowed to report an error.\n\nIn the text to which I was responding it seemed like you were saying\nthe opposite. Perhaps I misunderstood.\n\n> The question I have been asking is how. With that said, we should only have two options; one is the return value of the FDW commit routine, and the other is via ereport(ERROR). I suggested the possibility of the former, because if the FDW does ereport(ERROR), Postgres core (transaction manager) may have difficulty in handling the rest of the participants.\n\nI don't think that is going to work. It is very difficult to write\ncode that doesn't ever ERROR in PostgreSQL. It is not impossible if\nthe operation is trivial enough, but I think you're greatly\nunderestimating the complexity of committing the remote transaction.\nIf somebody had designed PostgreSQL so that every function returns a\nreturn code and every time you call some other function you check that\nreturn code and pass any error up to your own caller, then there would\nbe no problem here. But in fact the design was that at the first sign\nof trouble you throw an ERROR. It's not easy to depart from that\nprogramming model in just one place.\n\n> > Also, leaving aside theoretical arguments, I think it's not\n> > realistically possible for an FDW author to write code to commit a\n> > prepared transaction that will be safe in the context of running late\n> > in PrepareTransaction(), after we've already done\n> > RecordTransactionCommit(). Such code can't avoid throwing errors\n> > because it can't avoid performing operations and allocating memory.\n>\n> I'm not completely sure about this. 
I thought (and said) that the only thing the FDW does would be to send a commit request through an existing connection. So, I think it's not a severe restriction to require FDWs to do ereport(ERROR) during commits (of the second phase of 2PC.)\n\nTo send a commit request through an existing connection, you have to\nsend some bytes over the network using a send() or write() system\ncall. That can fail. Then you have to read the response back over the\nnetwork using recv() or read(). That can also fail. You also need to\nparse the result that you get from the remote side, which can also\nfail, because you could get back garbage for some reason. And\ndepending on the details, you might first need to construct the\nmessage you're going to send, which might be able to fail too. Also,\nthe data might be encrypted using SSL, so you might have to decrypt\nit, which can also fail, and you might need to encrypt data before\nsending it, which can fail. In fact, if you're using the OpenSSL,\ntrying to call SSL_read() or SSL_write() can both read and write data\nfrom the socket, even multiple times, so you have extra opportunities\nto fail.\n\n> (I took \"abort\" as the same as \"rollback\" here.) Once we've sent commit requests to some participants, we can't abort the transaction. If one FDW returned an error halfway, we need to send commit requests to the rest of participants.\n\nI understand that it's not possible to abort the local transaction to\nabort after it's been committed, but that doesn't mean that we're\ngoing to be able to send the commit requests to the rest of the\nparticipants. We want to be able to do that, certainly, but there's no\nguarantee that it's actually possible. Again, the remote servers may\nbe dropped into a volcano, or less seriously, we may not be able to\naccess them. Also, someone may kill off our session.\n\n> It's a design question, as I repeatedly said, whether and how we should report the error of some participants to the client. 
For instance, how should we report the errors of multiple participants? Concatenate those error messages?\n\nSure, I agree that there are some questions about how to report errors.\n\n> Anyway, we should design the interface first, giving much thought and respecting the ideas of predecessors (TX/XA, MS DTC, JTA/JTS). Otherwise, we may end up like \"We implemented like this, so the interface is like this and it can only behave like this, although you may find it strange...\" That might be a situation similar to what your comment \"the design of PostgreSQL, in all circumstances, the way you recover from an error is to abort the transaction\" suggests -- Postgres doesn't have statement-level rollback.\n\nI think that's a valid concern, but we also have to have a plan that\nis realistic. Some things are indeed not possible in PostgreSQL's\ndesign. Also, some of these problems are things everyone has to\nsomehow confront. There's no database doing 2PC that can't have a\nsituation where one of the machines disappears unexpectedly due to\nsome natural disaster or administrator interference. It might be the\ncase that our inability to do certain things safely during transaction\ncommit puts us out of compliance with the spec, but it can't be the\ncase that some other system has no possible failures during\ntransaction commit. The problem of the network potentially being\ndisconnected between one packet and the next exists in every system.\n\n> I don't think the resolver-based approach would bring us far enough. It's fundamentally a bottleneck. Such a background process should only handle commits whose requests failed to be sent due to server down.\n\nWhy is it fundamentally a bottleneck? It seems to me in some cases it\ncould scale better than any other approach. If we have to commit on\n100 shards in only one process we can only do those commits one at a\ntime. 
If we can use resolver processes we could do all 100 at once if\nthe user can afford to run that many resolvers, which should be way\nfaster. It is true that if the resolver does not have a connection\nopen and must open one, that might be slow, but presumably after that\nit can keep the connection open and reuse it for subsequent\ndistributed transactions. I don't really see why that should be\nparticularly slow.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 11 Jun 2021 08:50:42 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "\n\nOn 2021/05/11 13:37, Masahiko Sawada wrote:\n> I've attached the updated patches that incorporated comments from\n> Zhihong and Ikeda-san.\n\nThanks for updating the patches!\n\nI'm still reading these patches, but I'd like to share some review comments\nthat I found so far.\n\n(1)\n+/* Remove the foreign transaction from FdwXactParticipants */\n+void\n+FdwXactUnregisterXact(UserMapping *usermapping)\n+{\n+\tAssert(IsTransactionState());\n+\tRemoveFdwXactEntry(usermapping->umid);\n+}\n\nCurrently there is no user of FdwXactUnregisterXact().\nThis function should be removed?\n\n\n(2)\nWhen I ran the regression test, I got the following failure.\n\n========= Contents of ./src/test/modules/test_fdwxact/regression.diffs\ndiff -U3 /home/runner/work/postgresql/postgresql/src/test/modules/test_fdwxact/expected/test_fdwxact.out /home/runner/work/postgresql/postgresql/src/test/modules/test_fdwxact/results/test_fdwxact.out\n--- /home/runner/work/postgresql/postgresql/src/test/modules/test_fdwxact/expected/test_fdwxact.out\t2021-06-10 02:19:43.808622747 +0000\n+++ /home/runner/work/postgresql/postgresql/src/test/modules/test_fdwxact/results/test_fdwxact.out\t2021-06-10 02:29:53.452410462 +0000\n@@ -174,7 +174,7 @@\n SELECT count(*) FROM pg_foreign_xacts;\n count\n -------\n- 1\n+ 4\n (1 row)\n\n\n(3)\n+\t\t\t\t errmsg(\"could not read foreign transaction state from xlog at %X/%X\",\n+\t\t\t\t\t\t(uint32) (lsn >> 32),\n+\t\t\t\t\t\t(uint32) lsn)));\n\nLSN_FORMAT_ARGS() should be used?\n\n\n(4)\n+extern void RecreateFdwXactFile(TransactionId xid, Oid umid, void *content,\n+\t\t\t\t\t\t\t\tint len);\n\nSince RecreateFdwXactFile() is used only in fdwxact.c,\nthe above \"extern\" is not necessary?\n\n\n(5)\n+2. 
Pre-Commit phase (1st phase of two-phase commit)\n+we record the corresponding WAL indicating that the foreign server is involved\n+with the current transaction before doing PREPARE all foreign transactions.\n+Thus, in case we loose connectivity to the foreign server or crash ourselves,\n+we will remember that we might have prepared tranascation on the foreign\n+server, and try to resolve it when connectivity is restored or after crash\n+recovery.\n\nSo currently FdwXactInsertEntry() calls XLogInsert() and XLogFlush() for\nXLOG_FDWXACT_INSERT WAL record. Additionally we should also wait there\nfor WAL record to be replicated to the standby if sync replication is enabled?\nOtherwise, when the failover happens, new primary (past-standby)\nmight not have enough XLOG_FDWXACT_INSERT WAL records and\nmight fail to find some in-doubt foreign transactions.\n\n\n(6)\nXLogFlush() is called for each foreign transaction. So if there are many\nforeign transactions, XLogFlush() is called too frequently. Which might\ncause unnecessary performance overhead? Instead, for example,\nwe should call XLogFlush() only at once in FdwXactPrepareForeignTransactions()\nafter inserting all WAL records for all foreign transactions?\n\n\n(7)\n \t/* Open connection; report that we'll create a prepared statement. */\n \tfmstate->conn = GetConnection(user, true, &fmstate->conn_state);\n+\tMarkConnectionModified(user);\n\nMarkConnectionModified() should be called also when TRUNCATE on\na foreign table is executed?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Sat, 12 Jun 2021 01:25:21 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "From: Robert Haas <robertmhaas@gmail.com>\r\n> On Thu, Jun 10, 2021 at 9:58 PM tsunakawa.takay@fujitsu.com\r\n> <tsunakawa.takay@fujitsu.com> wrote:\r\n> > The question I have been asking is how. With that said, we should only have\r\n> two options; one is the return value of the FDW commit routine, and the other is\r\n> via ereport(ERROR). I suggested the possibility of the former, because if the\r\n> FDW does ereport(ERROR), Postgres core (transaction manager) may have\r\n> difficulty in handling the rest of the participants.\r\n> \r\n> I don't think that is going to work. It is very difficult to write\r\n> code that doesn't ever ERROR in PostgreSQL. It is not impossible if\r\n> the operation is trivial enough, but I think you're greatly\r\n> underestimating the complexity of committing the remote transaction.\r\n> If somebody had designed PostgreSQL so that every function returns a\r\n> return code and every time you call some other function you check that\r\n> return code and pass any error up to your own caller, then there would\r\n> be no problem here. But in fact the design was that at the first sign\r\n> of trouble you throw an ERROR. It's not easy to depart from that\r\n> programming model in just one place.\r\n\r\n> > I'm not completely sure about this. I thought (and said) that the only thing\r\n> the FDW does would be to send a commit request through an existing\r\n> connection. So, I think it's not a severe restriction to require FDWs to do\r\n> ereport(ERROR) during commits (of the second phase of 2PC.)\r\n> \r\n> To send a commit request through an existing connection, you have to\r\n> send some bytes over the network using a send() or write() system\r\n> call. That can fail. Then you have to read the response back over the\r\n> network using recv() or read(). That can also fail. You also need to\r\n> parse the result that you get from the remote side, which can also\r\n> fail, because you could get back garbage for some reason. 
And\r\n> depending on the details, you might first need to construct the\r\n> message you're going to send, which might be able to fail too. Also,\r\n> the data might be encrypted using SSL, so you might have to decrypt\r\n> it, which can also fail, and you might need to encrypt data before\r\n> sending it, which can fail. In fact, if you're using the OpenSSL,\r\n> trying to call SSL_read() or SSL_write() can both read and write data\r\n> from the socket, even multiple times, so you have extra opportunities\r\n> to fail.\r\n\r\nI know sending a commit request may get an error from various underlying functions, but we're talking about the client side, not the Postgres's server side that could unexpectedly ereport(ERROR) somewhere. So, the new FDW commit routine won't lose control and can return an error code as its return value. For instance, the FDW commit routine for DBMS-X would typically be:\r\n\r\nint\r\nDBMSXCommit(...)\r\n{\r\n\tint ret;\r\n\r\n\t/* extract info from the argument to pass to xa_commit() */\r\n\r\n\tret = DBMSX_xa_commit(...);\r\n\t/* This is the actual commit function which is exposed to the app server (e.g. Tuxedo) through the xa_commit() interface */\r\n\r\n\t/* map xa_commit() return values to the corresponding return values of the FDW commit routine */\r\n\tswitch (ret)\r\n\t{\r\n\t\tcase XA_RMERR:\r\n\t\t\tret = ...;\r\n\t\t\tbreak;\r\n\t\t...\r\n\t}\r\n\r\n\treturn ret;\r\n}\r\n\r\n\r\n> I think that's a valid concern, but we also have to have a plan that\r\n> is realistic. Some things are indeed not possible in PostgreSQL's\r\n> design. Also, some of these problems are things everyone has to\r\n> somehow confront. There's no database doing 2PC that can't have a\r\n> situation where one of the machines disappears unexpectedly due to\r\n> some natural disaster or administrator interference. 
It might be the\r\n> case that our inability to do certain things safely during transaction\r\n> commit puts us out of compliance with the spec, but it can't be the\r\n> case that some other system has no possible failures during\r\n> transaction commit. The problem of the network potentially being\r\n> disconnected between one packet and the next exists in every system.\r\n\r\nSo, we need to design how commit behaves from the user's perspective. That's the functional design. We should figure out what's the desirable response of commit first, and then see if we can implement it or have to compromise in some way. I think we can reference the X/Open TX standard and/or JTS (Java Transaction Service) specification (I haven't had a chance to read them yet, though.) Just in case we can't find the requested commit behavior in the volcano case from those specifications, ... (I'm hesitant to say this because it may be hard,) it's desirable to follow representative products such as Tuxedo and GlassFish (the reference implementation of Java EE specs.)\r\n\r\n\r\n> > I don't think the resolver-based approach would bring us far enough. It's\r\n> fundamentally a bottleneck. Such a background process should only handle\r\n> commits whose requests failed to be sent due to server down.\r\n> \r\n> Why is it fundamentally a bottleneck? It seems to me in some cases it\r\n> could scale better than any other approach. If we have to commit on\r\n> 100 shards in only one process we can only do those commits one at a\r\n> time. If we can use resolver processes we could do all 100 at once if\r\n> the user can afford to run that many resolvers, which should be way\r\n> faster. It is true that if the resolver does not have a connection\r\n> open and must open one, that might be slow, but presumably after that\r\n> it can keep the connection open and reuse it for subsequent\r\n> distributed transactions. 
I don't really see why that should be\r\n> particularly slow.\r\n\r\nConcurrent transactions are serialized at the resolver. I heard that the current patch handles 2PC like this: the TM (transaction manager in Postgres core) requests prepare to the resolver, the resolver sends prepare to the remote server and wait for reply, the TM gets back control from the resolver, TM requests commit to the resolver, the resolver sends commit to the remote server and wait for reply, and TM gets back control. The resolver handles one transaction at a time.\r\n\r\nIn regard to the case where one session has to commit on multiple remote servers, we're talking about the asynchronous interface just like what the XA standard provides.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Mon, 14 Jun 2021 02:04:31 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Sun, Jun 13, 2021 at 10:04 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n> I know sending a commit request may get an error from various underlying functions, but we're talking about the client side, not the Postgres's server side that could unexpectedly ereport(ERROR) somewhere. So, the new FDW commit routine won't lose control and can return an error code as its return value. For instance, the FDW commit routine for DBMS-X would typically be:\n>\n> int\n> DBMSXCommit(...)\n> {\n> int ret;\n>\n> /* extract info from the argument to pass to xa_commit() */\n>\n> ret = DBMSX_xa_commit(...);\n> /* This is the actual commit function which is exposed to the app server (e.g. Tuxedo) through the xa_commit() interface */\n>\n> /* map xa_commit() return values to the corresponding return values of the FDW commit routine */\n> switch (ret)\n> {\n> case XA_RMERR:\n> ret = ...;\n> break;\n> ...\n> }\n>\n> return ret;\n> }\n\nWell, we're talking about running this commit routine from within\nCommitTransaction(), right? So I think it is in fact running in the\nserver. And if that's so, then you have to worry about how to make it\nrespond to interrupts. You can't just call some functions\nDBMSX_xa_commit() and wait for infinite time for it to return. Look at\npgfdw_get_result() for an example of what real code to do this looks\nlike.\n\n> So, we need to design how commit behaves from the user's perspective. That's the functional design. We should figure out what's the desirable response of commit first, and then see if we can implement it or have to compromise in some way. I think we can reference the X/Open TX standard and/or JTS (Java Transaction Service) specification (I haven't had a chance to read them yet, though.) Just in case we can't find the requested commit behavior in the volcano case from those specifications, ... 
(I'm hesitant to say this because it may be hard,) it's desirable to follow representative products such as Tuxedo and GlassFish (the reference implementation of Java EE specs.)\n\nHonestly, I am not quite sure what any specification has to say about\nthis. We're talking about what happens when a user does something with\na foreign table and then type COMMIT. That's all about providing a set\nof behaviors that are consistent with how PostgreSQL works in other\nsituations. You can't negotiate away the requirement to handle errors\nin a way that works with PostgreSQL's infrastructure, or the\nrequirement that any length operation handle interrupts properly, by\nappealing to a specification.\n\n> Concurrent transactions are serialized at the resolver. I heard that the current patch handles 2PC like this: the TM (transaction manager in Postgres core) requests prepare to the resolver, the resolver sends prepare to the remote server and wait for reply, the TM gets back control from the resolver, TM requests commit to the resolver, the resolver sends commit to the remote server and wait for reply, and TM gets back control. The resolver handles one transaction at a time.\n\nThat sounds more like a limitation of the present implementation than\na fundamental problem. We shouldn't reject the idea of having a\nresolver process handle this just because the initial implementation\nmight be slow. If there's no fundamental problem with the idea,\nparallelism and concurrency can be improved in separate patches at a\nlater time. It's much more important at this stage to reject ideas\nthat are not theoretically sound.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 14 Jun 2021 12:08:51 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "From: Robert Haas <robertmhaas@gmail.com>\r\n> Well, we're talking about running this commit routine from within\r\n> CommitTransaction(), right? So I think it is in fact running in the\r\n> server. And if that's so, then you have to worry about how to make it\r\n> respond to interrupts. You can't just call some functions\r\n> DBMSX_xa_commit() and wait for infinite time for it to return. Look at\r\n> pgfdw_get_result() for an example of what real code to do this looks\r\n> like.\r\n\r\nPostgres can do that, but other implementations cannot necessarily do it, I'm afraid. But before that, the FDW interface documentation doesn't describe anything about how to handle interrupts. Actually, odbc_fdw and possibly other FDWs don't respond to interrupts.\r\n\r\n\r\n> Honestly, I am not quite sure what any specification has to say about\r\n> this. We're talking about what happens when a user does something with\r\n> a foreign table and then type COMMIT. That's all about providing a set\r\n> of behaviors that are consistent with how PostgreSQL works in other\r\n> situations. You can't negotiate away the requirement to handle errors\r\n> in a way that works with PostgreSQL's infrastructure, or the\r\n> requirement that any length operation handle interrupts properly, by\r\n> appealing to a specification.\r\n\r\nWhat we're talking about here is mainly whether commit should return success or failure when some participants failed to commit in the second phase of 2PC. That's new to Postgres, isn't it? Anyway, we should respect existing relevant specifications and (well-known) implementations before we conclude that we have to devise our own behavior.\r\n\r\n\r\n> That sounds more like a limitation of the present implementation than\r\n> a fundamental problem. We shouldn't reject the idea of having a\r\n> resolver process handle this just because the initial implementation\r\n> might be slow. 
If there's no fundamental problem with the idea,\r\n> parallelism and concurrency can be improved in separate patches at a\r\n> later time. It's much more important at this stage to reject ideas\r\n> that are not theoretically sound.\r\n\r\nWe talked about that, and unfortunately, I haven't seen a good and feasible idea to enhance the current approach that involves the resolver from the beginning of 2PC processing. Honestly, I don't understand why such a \"one prepare, one commit in turn\" serialization approach can be allowed in PostgreSQL where developers pursue best performance and even try to refrain from adding an if statement in a hot path. As I showed and Ikeda-san said, other implementations have each client session send prepare and commit requests. That's a natural way to achieve reasonable concurrency and performance.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Tue, 15 Jun 2021 09:51:07 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Tue, Jun 15, 2021 at 5:51 AM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n> Postgres can do that, but other implementations can not necessaily do it, I'm afraid. But before that, the FDW interface documentation doesn't describe anything about how to handle interrupts. Actually, odbc_fdw and possibly other FDWs don't respond to interrupts.\n\nWell, I'd consider that a bug.\n\n> What we're talking here is mainly whether commit should return success or failure when some participants failed to commit in the second phase of 2PC. That's new to Postgres, isn't it? Anyway, we should respect existing relevant specifications and (well-known) implementations before we conclude that we have to devise our own behavior.\n\nSure ... but we can only decide to do things that the implementation\ncan support, and running code that might fail after we've committed\nlocally isn't one of them.\n\n> We talked about that, and unfortunately, I haven't seen a good and feasible idea to enhance the current approach that involves the resolver from the beginning of 2PC processing. Honestly, I don't understand why such a \"one prepare, one commit in turn\" serialization approach can be allowed in PostgreSQL where developers pursue best performance and even tries to refrain from adding an if statement in a hot path. As I showed and Ikeda-san said, other implementations have each client session send prepare and commit requests. That's a natural way to achieve reasonable concurrency and performance.\n\nI think your comparison here is quite unfair. We work hard to avoid adding\noverhead in hot paths where it might cost, but the FDW case involves a\nnetwork round-trip anyway, so the cost of an if-statement would surely\nbe insignificant. I feel like you want to assume without any evidence\nthat a local resolver can never be quick enough, even though the cost\nof IPC between local processes shouldn't be that high compared to a\nnetwork round trip. 
But you also want to suppose that we can run code\nthat might fail late in the commit process even though there is lots\nof evidence that this will cause problems, starting with the code\ncomments that clearly say so.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 16 Jun 2021 12:07:36 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "\tFrom: Robert Haas <robertmhaas@gmail.com>\r\n> On Tue, Jun 15, 2021 at 5:51 AM tsunakawa.takay@fujitsu.com\r\n> <tsunakawa.takay@fujitsu.com> wrote:\r\n> > Postgres can do that, but other implementations can not necessaily do it, I'm\r\n> afraid. But before that, the FDW interface documentation doesn't describe\r\n> anything about how to handle interrupts. Actually, odbc_fdw and possibly\r\n> other FDWs don't respond to interrupts.\r\n> \r\n> Well, I'd consider that a bug.\r\n\r\nI kind of hesitate to call it a bug... Unlike libpq, JDBC (for jdbc_fdw) doesn't have an asynchronous interface, and the Oracle and PostgreSQL ODBC drivers don't support an asynchronous interface. Even with libpq, COMMIT (and other SQL commands) is not always cancellable, e.g., when the (NFS) storage server gets hung while writing WAL.\r\n\r\n\r\n> > What we're talking here is mainly whether commit should return success or\r\n> failure when some participants failed to commit in the second phase of 2PC.\r\n> That's new to Postgres, isn't it? Anyway, we should respect existing relevant\r\n> specifications and (well-known) implementations before we conclude that we\r\n> have to devise our own behavior.\r\n> \r\n> Sure ... but we can only decide to do things that the implementation\r\n> can support, and running code that might fail after we've committed\r\n> locally isn't one of them.\r\n\r\nYes, I understand that Postgres may not be able to conform to specifications or well-known implementations in all aspects. I'm just suggesting to take the stance \"We carefully considered established industry specifications that we can base on, did our best to design the desirable behavior learned from them, but couldn't implement a few parts\", rather than \"We did what we like and can do.\"\r\n\r\n\r\n> I think your comparison here is quite unfair. 
We work hard to add\r\n> overhead in hot paths where it might cost, but the FDW case involves a\r\n> network round-trip anyway, so the cost of an if-statement would surely\r\n> be insignificant. I feel like you want to assume without any evidence\r\n> that a local resolver can never be quick enough, even thought the cost\r\n> of IPC between local processes shouldn't be that high compared to a\r\n> network round trip. But you also want to suppose that we can run code\r\n> that might fail late in the commit process even though there is lots\r\n> of evidence that this will cause problems, starting with the code\r\n> comments that clearly say so.\r\n\r\nThere may be better examples. What I wanted to say is just that I believe it's not PG developers' standard to allow serial prepare and commit. Let's make it clear what's difficult to do the 2PC from each client session in normal operation without going through the resolver.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Fri, 18 Jun 2021 02:48:34 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "Hi Sawada-san,\r\n\r\nI also tried to play a bit with the latest patches similar to Ikeda-san,\r\nand with foreign 2PC parameter enabled/required.\r\n\r\n> > >> b. about performance bottleneck (just share my simple benchmark\r\n> > >> results)\r\n> > >>\r\n> > >> The resolver process can be performance bottleneck easily although\r\n> > >> I think some users want this feature even if the performance is not so\r\n> good.\r\n> > >>\r\n> > >> I tested with very simple workload in my laptop.\r\n> > >>\r\n> > >> The test condition is\r\n> > >> * two remote foreign partitions and one transaction inserts an\r\n> > >> entry in each partitions.\r\n> > >> * local connection only. If NW latency became higher, the\r\n> > >> performance became worse.\r\n> > >> * pgbench with 8 clients.\r\n> > >>\r\n> > >> The test results is the following. The performance of 2PC is only\r\n> > >> 10% performance of the one of without 2PC.\r\n> > >>\r\n> > >> * with foreign_twophase_commit = requried\r\n> > >> -> If load with more than 10TPS, the number of unresolved foreign\r\n> > >> -> transactions\r\n> > >> is increasing and stop with the warning \"Increase\r\n> > >> max_prepared_foreign_transactions\".\r\n> > >\r\n> > > What was the value of max_prepared_foreign_transactions?\r\n> >\r\n> > Now, I tested with 200.\r\n> >\r\n> > If each resolution is finished very soon, I thought it's enough\r\n> > because 8clients x 2partitions = 16, though... 
But, it's difficult how\r\n> > to know the stable values.\r\n> \r\n> During resolving one distributed transaction, the resolver needs both one\r\n> round trip and fsync-ing WAL record for each foreign transaction.\r\n> Since the client doesn’t wait for the distributed transaction to be resolved,\r\n> the resolver process can be easily bottle-neck given there are 8 clients.\r\n> \r\n> If foreign transaction resolution was resolved synchronously, 16 would\r\n> suffice.\r\n\r\n\r\nI tested the V36 patches on my 16-core machines.\r\nI setup two foreign servers (F1, F2) .\r\nF1 has addressbook table.\r\nF2 has pgbench tables (scale factor = 1).\r\nThere is also 1 coordinator (coor) server where I created user mapping to access the foreign servers.\r\nI executed the benchmark measurement on coordinator.\r\nMy custom scripts are setup in a way that queries from coordinator\r\nwould have to access the two foreign servers.\r\n\r\nCoordinator:\r\nmax_prepared_foreign_transactions = 200\r\nmax_foreign_transaction_resolvers = 1\r\nforeign_twophase_commit = required\r\n\r\nOther external servers 1 & 2 (F1 & F2):\r\nmax_prepared_transactions = 100\r\n\r\n\r\n[select.sql]\r\n\\set int random(1, 100000)\r\nBEGIN;\r\nSELECT ad.name, ad.age, ac.abalance\r\nFROM addressbook ad, pgbench_accounts ac\r\nWHERE ad.id = :int AND ad.id = ac.aid;\r\nCOMMIT;\r\n\r\nI then executed:\r\npgbench -r -c 2 -j 2 -T 60 -f select.sql coor\r\n\r\nWhile there were no problems with 1-2 clients, I started having problems\r\nwhen running the benchmark with more than 3 clients.\r\n\r\npgbench -r -c 4 -j 4 -T 60 -f select.sql coor\r\n\r\nI got the following error on coordinator:\r\n\r\n[95396] ERROR: could not prepare transaction on server F2 with ID fx_151455979_1216200_16422\r\n[95396] STATEMENT: COMMIT;\r\nWARNING: there is no transaction in progress\r\npgbench: error: client 1 script 0 aborted in command 3 query 0: ERROR: could not prepare transaction on server F2 with ID 
fx_151455979_1216200_16422\r\n\r\nHere's the log on foreign server 2 <F2> matching the above error:\r\n<F2> LOG: statement: PREPARE TRANSACTION 'fx_151455979_1216200_16422'\r\n<F2> ERROR: maximum number of prepared transactions reached\r\n<F2> HINT: Increase max_prepared_transactions (currently 100).\r\n<F2> STATEMENT: PREPARE TRANSACTION 'fx_151455979_1216200_16422'\r\n\r\nSo I increased the max_prepared_transactions of <F1> and <F2> from 100 to 200.\r\nThen I got the error:\r\n\r\n[146926] ERROR: maximum number of foreign transactions reached\r\n[146926] HINT: Increase max_prepared_foreign_transactions: \"200\".\r\n\r\nSo I increased the max_prepared_foreign_transactions to \"300\",\r\nand got the same error about needing to increase the max_prepared_transactions of the foreign servers.\r\n\r\nI just can't find the right tuning values for this.\r\nIt seems that we always run out of memory in FdwXactState insert_fdwxact \r\nwith multiple concurrent connections during PREPARE TRANSACTION.\r\nThis one I only encountered for the SELECT benchmark. \r\nAlthough I've got no problems with multiple connections for my custom scripts for\r\nUPDATE and INSERT benchmarks when I tested up to 30 clients.\r\n\r\nWould the following possibly solve this bottleneck problem?\r\n\r\n> > > To speed up the foreign transaction resolution, some ideas have been\r\n> > > discussed. As another idea, how about launching resolvers for each\r\n> > > foreign server? That way, we resolve foreign transactions on each\r\n> > > foreign server in parallel. If foreign transactions are concentrated\r\n> > > on the particular server, we can have multiple resolvers for the one\r\n> > > foreign server. It doesn’t change the fact that all foreign\r\n> > > transaction resolutions are processed by resolver processes.\r\n> >\r\n> > Awesome! 
There seems to be another pros that even if a foreign server\r\n> > is temporarily busy or stopped due to fail over, other foreign\r\n> > server's transactions can be resolved.\r\n> \r\n> Yes. We also might need to be careful about the order of foreign transaction\r\n> resolution. I think we need to resolve foreign transactions in arrival order at\r\n> least within a foreign server.\r\n\r\nRegards,\r\nKirk Jamison\r\n\r\n",
"msg_date": "Thu, 24 Jun 2021 12:46:47 +0000",
"msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Sat, Jun 12, 2021 at 1:25 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/05/11 13:37, Masahiko Sawada wrote:\n> > I've attached the updated patches that incorporated comments from\n> > Zhihong and Ikeda-san.\n>\n> Thanks for updating the patches!\n>\n> I'm still reading these patches, but I'd like to share some review comments\n> that I found so far.\n\nThank you for the comments!\n\n>\n> (1)\n> +/* Remove the foreign transaction from FdwXactParticipants */\n> +void\n> +FdwXactUnregisterXact(UserMapping *usermapping)\n> +{\n> + Assert(IsTransactionState());\n> + RemoveFdwXactEntry(usermapping->umid);\n> +}\n>\n> Currently there is no user of FdwXactUnregisterXact().\n> This function should be removed?\n\nI think that this function can be used by other FDW implementations\nto unregister foreign transaction entry, although there is no use case\nin postgres_fdw. This function corresponds to xa_unreg in the XA\nspecification.\n\n>\n>\n> (2)\n> When I ran the regression test, I got the following failure.\n>\n> ========= Contents of ./src/test/modules/test_fdwxact/regression.diffs\n> diff -U3 /home/runner/work/postgresql/postgresql/src/test/modules/test_fdwxact/expected/test_fdwxact.out /home/runner/work/postgresql/postgresql/src/test/modules/test_fdwxact/results/test_fdwxact.out\n> --- /home/runner/work/postgresql/postgresql/src/test/modules/test_fdwxact/expected/test_fdwxact.out 2021-06-10 02:19:43.808622747 +0000\n> +++ /home/runner/work/postgresql/postgresql/src/test/modules/test_fdwxact/results/test_fdwxact.out 2021-06-10 02:29:53.452410462 +0000\n> @@ -174,7 +174,7 @@\n> SELECT count(*) FROM pg_foreign_xacts;\n> count\n> -------\n> - 1\n> + 4\n> (1 row)\n\nWIll fix.\n\n>\n>\n> (3)\n> + errmsg(\"could not read foreign transaction state from xlog at %X/%X\",\n> + (uint32) (lsn >> 32),\n> + (uint32) lsn)));\n>\n> LSN_FORMAT_ARGS() should be used?\n\nAgreed.\n\n>\n>\n> (4)\n> +extern void RecreateFdwXactFile(TransactionId 
xid, Oid umid, void *content,\n> + int len);\n>\n> Since RecreateFdwXactFile() is used only in fdwxact.c,\n> the above \"extern\" is not necessary?\n\nRight.\n\n>\n>\n> (5)\n> +2. Pre-Commit phase (1st phase of two-phase commit)\n> +we record the corresponding WAL indicating that the foreign server is involved\n> +with the current transaction before doing PREPARE all foreign transactions.\n> +Thus, in case we loose connectivity to the foreign server or crash ourselves,\n> +we will remember that we might have prepared tranascation on the foreign\n> +server, and try to resolve it when connectivity is restored or after crash\n> +recovery.\n>\n> So currently FdwXactInsertEntry() calls XLogInsert() and XLogFlush() for\n> XLOG_FDWXACT_INSERT WAL record. Additionally we should also wait there\n> for WAL record to be replicated to the standby if sync replication is enabled?\n> Otherwise, when the failover happens, new primary (past-standby)\n> might not have enough XLOG_FDWXACT_INSERT WAL records and\n> might fail to find some in-doubt foreign transactions.\n\nBut even if we wait for the record to be replicated, this problem\nisn't completely resolved, right? If the server crashes before the\nstandby receives the record and the failover happens, then the new\nmaster doesn't have the record. I wonder if we need to have another\nFDW API in order to get the list of prepared transactions from the\nforeign server (FDW). For example in postgres_fdw case, it gets the\nlist of prepared transactions on the foreign server by executing a\nquery. It seems to me that this corresponds to xa_recover in the XA\nspecification.\n\n> (6)\n> XLogFlush() is called for each foreign transaction. So if there are many\n> foreign transactions, XLogFlush() is called too frequently. Which might\n> cause unnecessary performance overhead? 
Instead, for example,\n> we should call XLogFlush() only at once in FdwXactPrepareForeignTransactions()\n> after inserting all WAL records for all foreign transactions?\n\nAgreed.\n\n>\n>\n> (7)\n> /* Open connection; report that we'll create a prepared statement. */\n> fmstate->conn = GetConnection(user, true, &fmstate->conn_state);\n> + MarkConnectionModified(user);\n>\n> MarkConnectionModified() should be called also when TRUNCATE on\n> a foreign table is executed?\n\nGood catch. Will fix.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 24 Jun 2021 22:11:08 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Thu, Jun 24, 2021 at 9:46 PM k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n>\n> Hi Sawada-san,\n>\n> I also tried to play a bit with the latest patches similar to Ikeda-san,\n> and with foreign 2PC parameter enabled/required.\n\nThank you for testing the patch!\n\n>\n> > > >> b. about performance bottleneck (just share my simple benchmark\n> > > >> results)\n> > > >>\n> > > >> The resolver process can be performance bottleneck easily although\n> > > >> I think some users want this feature even if the performance is not so\n> > good.\n> > > >>\n> > > >> I tested with very simple workload in my laptop.\n> > > >>\n> > > >> The test condition is\n> > > >> * two remote foreign partitions and one transaction inserts an\n> > > >> entry in each partitions.\n> > > >> * local connection only. If NW latency became higher, the\n> > > >> performance became worse.\n> > > >> * pgbench with 8 clients.\n> > > >>\n> > > >> The test results is the following. The performance of 2PC is only\n> > > >> 10% performance of the one of without 2PC.\n> > > >>\n> > > >> * with foreign_twophase_commit = requried\n> > > >> -> If load with more than 10TPS, the number of unresolved foreign\n> > > >> -> transactions\n> > > >> is increasing and stop with the warning \"Increase\n> > > >> max_prepared_foreign_transactions\".\n> > > >\n> > > > What was the value of max_prepared_foreign_transactions?\n> > >\n> > > Now, I tested with 200.\n> > >\n> > > If each resolution is finished very soon, I thought it's enough\n> > > because 8clients x 2partitions = 16, though... 
But, it's difficult how\n> > > to know the stable values.\n> >\n> > During resolving one distributed transaction, the resolver needs both one\n> > round trip and fsync-ing WAL record for each foreign transaction.\n> > Since the client doesn’t wait for the distributed transaction to be resolved,\n> > the resolver process can be easily bottle-neck given there are 8 clients.\n> >\n> > If foreign transaction resolution was resolved synchronously, 16 would\n> > suffice.\n>\n>\n> I tested the V36 patches on my 16-core machines.\n> I setup two foreign servers (F1, F2) .\n> F1 has addressbook table.\n> F2 has pgbench tables (scale factor = 1).\n> There is also 1 coordinator (coor) server where I created user mapping to access the foreign servers.\n> I executed the benchmark measurement on coordinator.\n> My custom scripts are setup in a way that queries from coordinator\n> would have to access the two foreign servers.\n>\n> Coordinator:\n> max_prepared_foreign_transactions = 200\n> max_foreign_transaction_resolvers = 1\n> foreign_twophase_commit = required\n>\n> Other external servers 1 & 2 (F1 & F2):\n> max_prepared_transactions = 100\n>\n>\n> [select.sql]\n> \\set int random(1, 100000)\n> BEGIN;\n> SELECT ad.name, ad.age, ac.abalance\n> FROM addressbook ad, pgbench_accounts ac\n> WHERE ad.id = :int AND ad.id = ac.aid;\n> COMMIT;\n>\n> I then executed:\n> pgbench -r -c 2 -j 2 -T 60 -f select.sql coor\n>\n> While there were no problems with 1-2 clients, I started having problems\n> when running the benchmark with more than 3 clients.\n>\n> pgbench -r -c 4 -j 4 -T 60 -f select.sql coor\n>\n> I got the following error on coordinator:\n>\n> [95396] ERROR: could not prepare transaction on server F2 with ID fx_151455979_1216200_16422\n> [95396] STATEMENT: COMMIT;\n> WARNING: there is no transaction in progress\n> pgbench: error: client 1 script 0 aborted in command 3 query 0: ERROR: could not prepare transaction on server F2 with ID fx_151455979_1216200_16422\n>\n> Here's the 
log on foreign server 2 <F2> matching the above error:\n> <F2> LOG: statement: PREPARE TRANSACTION 'fx_151455979_1216200_16422'\n> <F2> ERROR: maximum number of prepared transactions reached\n> <F2> HINT: Increase max_prepared_transactions (currently 100).\n> <F2> STATEMENT: PREPARE TRANSACTION 'fx_151455979_1216200_16422'\n>\n> So I increased the max_prepared_transactions of <F1> and <F2> from 100 to 200.\n> Then I got the error:\n>\n> [146926] ERROR: maximum number of foreign transactions reached\n> [146926] HINT: Increase max_prepared_foreign_transactions: \"200\".\n>\n> So I increased the max_prepared_foreign_transactions to \"300\",\n> and got the same error of need to increase the max_prepared_transactions of foreign servers.\n>\n> I just can't find the right tuning values for this.\n> It seems that we always run out of memory in FdwXactState insert_fdwxact\n> with multiple concurrent connections during PREPARE TRANSACTION.\n> This one I only encountered for SELECT benchmark.\n> Although I've got no problems with multiple connections for my custom scripts for\n> UPDATE and INSERT benchmarks when I tested up to 30 clients.\n>\n> Would the following possibly solve this bottleneck problem?\n\nWith the following idea, the performance will get better but will not\nbe completely solved. Because those results shared by you and\nIkeda-san come from the fact that with the patch we asynchronously\ncommit the foreign prepared transaction (i.g., asynchronously\nperforming the second phase of 2PC), but not the architecture. As I\nmentioned before, I intentionally removed the synchronous committing\nforeign prepared transaction part from the patch set since we still\nneed to have a discussion of that part. Therefore, with this version\npatch, the backend returns OK to the client right after the local\ntransaction commits with neither committing foreign prepared\ntransactions by itself nor waiting for those to be committed by the\nresolver process. 
As long as the backend doesn’t wait for foreign\nprepared transactions to be committed and there is a limit of the\nnumber of foreign prepared transactions to be held, it could reach the\nupper bound if committing foreign prepared transactions cannot keep\nup.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 24 Jun 2021 22:27:40 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
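The resolver-bottleneck reasoning in the message above (one network round trip plus one WAL fsync per foreign transaction, while the client does not wait for resolution) can be put into a quick back-of-envelope model. The latency figures below are illustrative assumptions, not measurements from this thread:

```python
# Rough model of the single-resolver bottleneck: each distributed
# transaction leaves N foreign prepared transactions behind, and
# resolving each one costs one round trip plus one WAL fsync.

def resolver_capacity_tps(foreign_xacts_per_txn, round_trip_s, fsync_s):
    """Max distributed transactions/sec a single resolver can retire."""
    per_txn_cost = foreign_xacts_per_txn * (round_trip_s + fsync_s)
    return 1.0 / per_txn_cost

# Two remote partitions per transaction, as in Ikeda-san's test, with
# an assumed ~1 ms round trip and ~4 ms fsync per resolution.
cap = resolver_capacity_tps(2, 0.001, 0.004)
print(f"single resolver retires about {cap:.0f} distributed TPS")
# Whenever clients commit faster than this, unresolved foreign
# transactions pile up until max_prepared_foreign_transactions is hit.
```

Under these assumed numbers the resolver saturates at roughly 100 TPS, which is consistent with the observation that 8 clients can easily outpace it.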
{
"msg_contents": "On Thu, Jun 24, 2021 at 10:11 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sat, Jun 12, 2021 at 1:25 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >\n> >\n> >\n> > (5)\n> > +2. Pre-Commit phase (1st phase of two-phase commit)\n> > +we record the corresponding WAL indicating that the foreign server is involved\n> > +with the current transaction before doing PREPARE all foreign transactions.\n> > +Thus, in case we loose connectivity to the foreign server or crash ourselves,\n> > +we will remember that we might have prepared tranascation on the foreign\n> > +server, and try to resolve it when connectivity is restored or after crash\n> > +recovery.\n> >\n> > So currently FdwXactInsertEntry() calls XLogInsert() and XLogFlush() for\n> > XLOG_FDWXACT_INSERT WAL record. Additionally we should also wait there\n> > for WAL record to be replicated to the standby if sync replication is enabled?\n> > Otherwise, when the failover happens, new primary (past-standby)\n> > might not have enough XLOG_FDWXACT_INSERT WAL records and\n> > might fail to find some in-doubt foreign transactions.\n>\n> But even if we wait for the record to be replicated, this problem\n> isn't completely resolved, right?\n\nAh, I misunderstood the order of writing WAL records and preparing\nforeign transactions. You're right. Combining your suggestion below,\nperhaps we need to write all WAL records, call XLogFlush(), wait for\nthose records to be replicated, and prepare all foreign transactions.\nEven in cases where the server crashes during preparing a foreign\ntransaction and the failover happens, the new master has all foreign\ntransaction entries. Some of them might not actually be prepared on\nthe foreign servers but it should not be a problem.\n\n> > (6)\n> > XLogFlush() is called for each foreign transaction. So if there are many\n> > foreign transactions, XLogFlush() is called too frequently. Which might\n> > cause unnecessary performance overhead? 
Instead, for example,\n> > we should call XLogFlush() only at once in FdwXactPrepareForeignTransactions()\n> > after inserting all WAL records for all foreign transactions?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 25 Jun 2021 08:40:48 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
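Fujii-san's point (6) above — insert all the WAL records first and call XLogFlush() only once — can be illustrated with a toy model. This is plain Python standing in for the C XLogInsert()/XLogFlush() calls; the class and function names are invented for illustration:

```python
# Toy model of batching the WAL flush: one fsync per record versus a
# single fsync after all records, for the same durability point.

class WalLog:
    def __init__(self):
        self.records, self.flushes = [], 0

    def insert(self, rec):
        self.records.append(rec)
        return len(self.records)   # stand-in for the record's LSN

    def flush(self):
        self.flushes += 1          # one fsync up to the latest LSN

def prepare_per_record_flush(wal, n):
    for i in range(n):
        wal.insert(f"FDWXACT_INSERT {i}")
        wal.flush()                # n fsyncs: one per foreign transaction

def prepare_batched_flush(wal, n):
    for i in range(n):
        wal.insert(f"FDWXACT_INSERT {i}")
    wal.flush()                    # a single fsync covers all n records

a, b = WalLog(), WalLog()
prepare_per_record_flush(a, 5)
prepare_batched_flush(b, 5)
print(a.flushes, b.flushes)
```

Both variants end with the same five records durable, but the batched variant pays the fsync cost once instead of five times.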
{
"msg_contents": "\n\nOn 2021/06/24 22:27, Masahiko Sawada wrote:\n> On Thu, Jun 24, 2021 at 9:46 PM k.jamison@fujitsu.com\n> <k.jamison@fujitsu.com> wrote:\n>>\n>> Hi Sawada-san,\n>>\n>> I also tried to play a bit with the latest patches similar to Ikeda-san,\n>> and with foreign 2PC parameter enabled/required.\n> \n> Thank you for testing the patch!\n> \n>>\n>>>>>> b. about performance bottleneck (just share my simple benchmark\n>>>>>> results)\n>>>>>>\n>>>>>> The resolver process can be performance bottleneck easily although\n>>>>>> I think some users want this feature even if the performance is not so\n>>> good.\n>>>>>>\n>>>>>> I tested with very simple workload in my laptop.\n>>>>>>\n>>>>>> The test condition is\n>>>>>> * two remote foreign partitions and one transaction inserts an\n>>>>>> entry in each partitions.\n>>>>>> * local connection only. If NW latency became higher, the\n>>>>>> performance became worse.\n>>>>>> * pgbench with 8 clients.\n>>>>>>\n>>>>>> The test results is the following. The performance of 2PC is only\n>>>>>> 10% performance of the one of without 2PC.\n>>>>>>\n>>>>>> * with foreign_twophase_commit = requried\n>>>>>> -> If load with more than 10TPS, the number of unresolved foreign\n>>>>>> -> transactions\n>>>>>> is increasing and stop with the warning \"Increase\n>>>>>> max_prepared_foreign_transactions\".\n>>>>>\n>>>>> What was the value of max_prepared_foreign_transactions?\n>>>>\n>>>> Now, I tested with 200.\n>>>>\n>>>> If each resolution is finished very soon, I thought it's enough\n>>>> because 8clients x 2partitions = 16, though... 
But, it's difficult how\n>>>> to know the stable values.\n>>>\n>>> During resolving one distributed transaction, the resolver needs both one\n>>> round trip and fsync-ing WAL record for each foreign transaction.\n>>> Since the client doesn’t wait for the distributed transaction to be resolved,\n>>> the resolver process can be easily bottle-neck given there are 8 clients.\n>>>\n>>> If foreign transaction resolution was resolved synchronously, 16 would\n>>> suffice.\n>>\n>>\n>> I tested the V36 patches on my 16-core machines.\n>> I setup two foreign servers (F1, F2) .\n>> F1 has addressbook table.\n>> F2 has pgbench tables (scale factor = 1).\n>> There is also 1 coordinator (coor) server where I created user mapping to access the foreign servers.\n>> I executed the benchmark measurement on coordinator.\n>> My custom scripts are setup in a way that queries from coordinator\n>> would have to access the two foreign servers.\n>>\n>> Coordinator:\n>> max_prepared_foreign_transactions = 200\n>> max_foreign_transaction_resolvers = 1\n>> foreign_twophase_commit = required\n>>\n>> Other external servers 1 & 2 (F1 & F2):\n>> max_prepared_transactions = 100\n>>\n>>\n>> [select.sql]\n>> \\set int random(1, 100000)\n>> BEGIN;\n>> SELECT ad.name, ad.age, ac.abalance\n>> FROM addressbook ad, pgbench_accounts ac\n>> WHERE ad.id = :int AND ad.id = ac.aid;\n>> COMMIT;\n>>\n>> I then executed:\n>> pgbench -r -c 2 -j 2 -T 60 -f select.sql coor\n>>\n>> While there were no problems with 1-2 clients, I started having problems\n>> when running the benchmark with more than 3 clients.\n>>\n>> pgbench -r -c 4 -j 4 -T 60 -f select.sql coor\n>>\n>> I got the following error on coordinator:\n>>\n>> [95396] ERROR: could not prepare transaction on server F2 with ID fx_151455979_1216200_16422\n>> [95396] STATEMENT: COMMIT;\n>> WARNING: there is no transaction in progress\n>> pgbench: error: client 1 script 0 aborted in command 3 query 0: ERROR: could not prepare transaction on server F2 with ID 
fx_151455979_1216200_16422\n>>\n>> Here's the log on foreign server 2 <F2> matching the above error:\n>> <F2> LOG: statement: PREPARE TRANSACTION 'fx_151455979_1216200_16422'\n>> <F2> ERROR: maximum number of prepared transactions reached\n>> <F2> HINT: Increase max_prepared_transactions (currently 100).\n>> <F2> STATEMENT: PREPARE TRANSACTION 'fx_151455979_1216200_16422'\n>>\n>> So I increased the max_prepared_transactions of <F1> and <F2> from 100 to 200.\n>> Then I got the error:\n>>\n>> [146926] ERROR: maximum number of foreign transactions reached\n>> [146926] HINT: Increase max_prepared_foreign_transactions: \"200\".\n>>\n>> So I increased the max_prepared_foreign_transactions to \"300\",\n>> and got the same error of need to increase the max_prepared_transactions of foreign servers.\n>>\n>> I just can't find the right tuning values for this.\n>> It seems that we always run out of memory in FdwXactState insert_fdwxact\n>> with multiple concurrent connections during PREPARE TRANSACTION.\n>> This one I only encountered for SELECT benchmark.\n>> Although I've got no problems with multiple connections for my custom scripts for\n>> UPDATE and INSERT benchmarks when I tested up to 30 clients.\n>>\n>> Would the following possibly solve this bottleneck problem?\n> \n> With the following idea, the performance will get better but will not\n> be completely solved. Because those results shared by you and\n> Ikeda-san come from the fact that with the patch we asynchronously\n> commit the foreign prepared transaction (i.g., asynchronously\n> performing the second phase of 2PC), but not the architecture. As I\n> mentioned before, I intentionally removed the synchronous committing\n> foreign prepared transaction part from the patch set since we still\n> need to have a discussion of that part. 
Therefore, with this version\n> patch, the backend returns OK to the client right after the local\n> transaction commits with neither committing foreign prepared\n> transactions by itself nor waiting for those to be committed by the\n> resolver process. As long as the backend doesn’t wait for foreign\n> prepared transactions to be committed and there is a limit of the\n> number of foreign prepared transactions to be held, it could reach the\n> upper bound if committing foreign prepared transactions cannot keep\n> up.\n\nHi Jamison-san, sawada-san,\n\nThanks for testing!\n\nFWIW, I tested using pgbench with the \"--rate=\" option to check whether the server\ncan execute transactions with stable throughput. As sawada-san said,\nthe latest patch resolves the second phase of 2PC asynchronously, so\nit's difficult to control the throughput without the \"--rate=\" option.\n\nI also wondered what I should do when the error happened, because increasing\n\"max_prepared_foreign_transaction\" doesn't work. Since overload can also\ntrigger the error, is it better to add that case to the HINT message?\n\n\nBTW, if sawada-san has already developed running the resolver processes in parallel,\nwhy don't you measure the performance improvement? Although Robert-san,\nTunakawa-san and so on are discussing what architecture is best, one\ndiscussion point is that there is a performance risk in adopting the asynchronous\napproach. If we have promising solutions, I think we can move the discussion\nforward.\n\nIn my understanding, there are three improvement ideas. The first is to make\nthe resolver processes run in parallel. The second is to send \"COMMIT/ABORT\nPREPARED\" to remote servers in bulk. The third is to stop syncing the WAL in\nremove_fdwxact() after resolving is done, which I addressed in the mail sent\non June 3rd, 13:56. Since the third idea is not yet discussed, there may\nbe some misunderstanding on my part.\n\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 25 Jun 2021 09:53:52 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "\n\nOn 2021/06/24 22:11, Masahiko Sawada wrote:\n> On Sat, Jun 12, 2021 at 1:25 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> On 2021/05/11 13:37, Masahiko Sawada wrote:\n>> So currently FdwXactInsertEntry() calls XLogInsert() and XLogFlush() for\n>> XLOG_FDWXACT_INSERT WAL record. Additionally we should also wait there\n>> for WAL record to be replicated to the standby if sync replication is enabled?\n>> Otherwise, when the failover happens, new primary (past-standby)\n>> might not have enough XLOG_FDWXACT_INSERT WAL records and\n>> might fail to find some in-doubt foreign transactions.\n> \n> But even if we wait for the record to be replicated, this problem\n> isn't completely resolved, right? If the server crashes before the\n> standy receives the record and the failover happens then the new\n> master doesn't have the record. I wonder if we need to have another\n> FDW API in order to get the list of prepared transactions from the\n> foreign server (FDW). For example in postgres_fdw case, it gets the\n> list of prepared transactions on the foreign server by executing a\n> query. It seems to me that this corresponds to xa_recover in the XA\n> specification.\n\nFWIF, Citus implemented as sawada-san said above [1].\n\nSince each WAL record for PREPARE is flushing in the latest patch, the latency\nbecame too much, especially under synchronous replication. For example, the\ntransaction involving three foreign servers must wait to sync \"three\" WAL\nrecords for PREPARE and \"one\" WAL records for local commit in remote server\none by one sequentially. 
So, I think Sawada-san's idea is a good way to improve\nthe latency, although it increases the work for FDW developers.\n\n[1]\nSIGMOD 2021 525 Citus: Distributed PostgreSQL for Data Intensive Applications\nFrom 12:27, it explains how unresolved prepared xacts are handled.\nhttps://www.youtube.com/watch?v=AlF4C60FdlQ&list=PL3xUNnH4TdbsfndCMn02BqAAgGB0z7cwq\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 25 Jun 2021 10:33:23 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, Jun 25, 2021 at 9:53 AM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>\n> Hi Jamison-san, sawada-san,\n>\n> Thanks for testing!\n>\n> FWIF, I tested using pgbench with \"--rate=\" option to know the server\n> can execute transactions with stable throughput. As sawada-san said,\n> the latest patch resolved second phase of 2PC asynchronously. So,\n> it's difficult to control the stable throughput without \"--rate=\" option.\n>\n> I also worried what I should do when the error happened because to increase\n> \"max_prepared_foreign_transaction\" doesn't work. Since too overloading may\n> show the error, is it better to add the case to the HINT message?\n>\n> BTW, if sawada-san already develop to run the resolver processes in parallel,\n> why don't you measure performance improvement? Although Robert-san,\n> Tunakawa-san and so on are discussing what architecture is best, one\n> discussion point is that there is a performance risk if adopting asynchronous\n> approach. If we have promising solutions, I think we can make the discussion\n> forward.\n\nYeah, if we can asynchronously resolve the distributed transactions\nwithout worrying about max_prepared_foreign_transaction error, it\nwould be good. But we will need synchronous resolution at some point.\nI think we at least need to discuss it at this point.\n\nI've attached the new version patch that incorporates the comments\nfrom Fujii-san and Ikeda-san I got so far. We launch a resolver\nprocess per foreign server, committing prepared foreign transactions\non foreign servers in parallel. To get a better performance based on\nthe current architecture, we can have multiple resolver processes per\nforeign server but it seems not easy to tune it in practice. Perhaps\nis it better if we simply have a pool of resolver processes and we\nassign a resolver process to the resolution of one distributed\ntransaction one by one? 
That way, we need to launch resolver processes\nas many as the concurrent backends using 2PC.\n\n> In my understanding, there are three improvement idea. First is that to make\n> the resolver processes run in parallel. Second is that to send \"COMMIT/ABORT\n> PREPARED\" remote servers in bulk. Third is to stop syncing the WAL\n> remove_fdwxact() after resolving is done, which I addressed in the mail sent\n> at June 3rd, 13:56. Since third idea is not yet discussed, there may\n> be my misunderstanding.\n\nYes, those optimizations are promising. On the other hand, they could\nintroduce complexity to the code and APIs. I'd like to keep the first\nversion simple. I think we need to discuss them at this stage but can\nleave the implementation of both parallel execution and batch\nexecution as future improvements.\n\nFor the third idea, I think the implementation was wrong; it removes\nthe state file then flushes the WAL record. I think these should be\nperformed in the reverse order. Otherwise, FdwXactState entry could be\nleft on the standby if the server crashes between them. I might be\nmissing something though.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Wed, 30 Jun 2021 10:05:45 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "\n\nOn 2021/06/30 10:05, Masahiko Sawada wrote:\n> On Fri, Jun 25, 2021 at 9:53 AM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>>\n>> Hi Jamison-san, sawada-san,\n>>\n>> Thanks for testing!\n>>\n>> FWIF, I tested using pgbench with \"--rate=\" option to know the server\n>> can execute transactions with stable throughput. As sawada-san said,\n>> the latest patch resolved second phase of 2PC asynchronously. So,\n>> it's difficult to control the stable throughput without \"--rate=\" option.\n>>\n>> I also worried what I should do when the error happened because to increase\n>> \"max_prepared_foreign_transaction\" doesn't work. Since too overloading may\n>> show the error, is it better to add the case to the HINT message?\n>>\n>> BTW, if sawada-san already develop to run the resolver processes in parallel,\n>> why don't you measure performance improvement? Although Robert-san,\n>> Tunakawa-san and so on are discussing what architecture is best, one\n>> discussion point is that there is a performance risk if adopting asynchronous\n>> approach. If we have promising solutions, I think we can make the discussion\n>> forward.\n> \n> Yeah, if we can asynchronously resolve the distributed transactions\n> without worrying about max_prepared_foreign_transaction error, it\n> would be good. But we will need synchronous resolution at some point.\n> I think we at least need to discuss it at this point.\n> \n> I've attached the new version patch that incorporates the comments\n> from Fujii-san and Ikeda-san I got so far. We launch a resolver\n> process per foreign server, committing prepared foreign transactions\n> on foreign servers in parallel. To get a better performance based on\n> the current architecture, we can have multiple resolver processes per\n> foreign server but it seems not easy to tune it in practice. 
Perhaps\n> is it better if we simply have a pool of resolver processes and we\n> assign a resolver process to the resolution of one distributed\n> transaction one by one? That way, we need to launch resolver processes\n> as many as the concurrent backends using 2PC.\n\nThanks for updating the patches.\n\nI have tested in my local laptop and summary is the following.\n\n(1) The latest patch(v37) can improve throughput by 1.5 times compared to v36.\n\nAlthough I expected it improves by 2.0 times because the workload is that one\ntransaction access two remote servers... I think the reason is that the disk\nis bottleneck and I couldn't prepare disks for each postgresql servers. If I\ncould, I think the performance can be improved by 2.0 times.\n\n\n(2) The latest patch(v37) throughput of foreign_twophase_commit = required is\nabout 36% compared to the case if foreign_twophase_commit = disabled.\n\nAlthough the throughput is improved, the absolute performance is not good. It\nmay be the fate of 2PC. I think the reason is that the number of WAL writes is\nmuch increase and, the disk writes in my laptop is the bottleneck. I want to\nknow the result testing in richer environments if someone can do so.\n\n\n(3) The latest patch(v37) has no overhead if foreign_twophase_commit =\ndisabled. On the contrary, the performance improved by 3%. It may be within\nthe margin of error.\n\n\n\nThe test detail is following.\n\n# condition\n\n* 1 coordinator and 3 foreign servers\n\n* 4 instance shared one ssd disk.\n\n* one transaction queries different two foreign servers.\n\n``` fxact_update.pgbench\n\\set id random(1, 1000000)\n\n\\set partnum 3\n\\set p1 random(1, :partnum)\n\\set p2 ((:p1 + 1) % :partnum) + 1\n\nBEGIN;\nUPDATE part:p1 SET md5 = md5(clock_timestamp()::text) WHERE id = :id;\nUPDATE part:p2 SET md5 = md5(clock_timestamp()::text) WHERE id = :id;\nCOMMIT;\n```\n\n* pgbench generates load. 
I increased ${RATE} little by little until the \"maximum\nnumber of foreign transactions reached\" error happened.\n\n```\npgbench -f fxact_update.pgbench -R ${RATE} -c 8 -j 8 -T 180\n```\n\n* parameters\nmax_prepared_transactions = 100\nmax_prepared_foreign_transactions = 200\nmax_foreign_transaction_resolvers = 4\n\n\n# test source code patterns\n\n1. 2pc patches(v36) based on 6d0eb385 (foreign_twophase_commit = required).\n2. 2pc patches(v37) based on 2595e039 (foreign_twophase_commit = required).\n3. 2pc patches(v37) based on 2595e039 (foreign_twophase_commit = disabled).\n4. 2595e039 without 2pc patches(v37).\n\n\n# results\n\n1. tps = 241.8000TPS\n latency average = 10.413ms\n\n2. tps = 359.017519 ( 1.5 times compared to 1.; 36% of 3. )\n latency average = 15.427ms\n\n3. tps = 987.372220 ( 1.03 times compared to 4. )\n latency average = 8.102ms\n\n4. tps = 955.984574\n latency average = 8.368ms\n\nThe disk is the bottleneck in my environment because disk util is almost 100%\nin every pattern. If disks for each instance can be prepared, I think we can\nexpect more performance improvements.\n\n\n>> In my understanding, there are three improvement idea. First is that to make\n>> the resolver processes run in parallel. Second is that to send \"COMMIT/ABORT\n>> PREPARED\" remote servers in bulk. Third is to stop syncing the WAL\n>> remove_fdwxact() after resolving is done, which I addressed in the mail sent\n>> at June 3rd, 13:56. Since third idea is not yet discussed, there may\n>> be my misunderstanding.\n> \n> Yes, those optimizations are promising. On the other hand, they could\n> introduce complexity to the code and APIs. I'd like to keep the first\n> version simple. I think we need to discuss them at this stage but can\n> leave the implementation of both parallel execution and batch\n> execution as future improvements.\n\nOK, I agree.\n\n\n> For the third idea, I think the implementation was wrong; it removes\n> the state file then flushes the WAL record. I think these should be\n> performed in the reverse order. Otherwise, FdwXactState entry could be\n> left on the standby if the server crashes between them. I might be\n> missing something though.\n\nOh, I see. I think you're right, though what you wanted to say is that it\nshould flush the WAL records first and then remove the state file. If \"COMMIT/ABORT\nPREPARED\" statements are executed in bulk, it seems enough to sync the WAL only\nonce, then remove all the related state files.\n\n\nBTW, I tested building the binary with -O2, and I got the following warning,\nwhich needs to be fixed.\n\n```\nfdwxact.c: In function 'PrepareAllFdwXacts':\nfdwxact.c:897:13: warning: 'flush_lsn' may be used uninitialized in this\nfunction [-Wmaybe-uninitialized]\n 897 | canceled = SyncRepWaitForLSN(flush_lsn, false);\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n```\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 5 Jul 2021 15:28:58 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
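As a cross-check, the ratios quoted in the benchmark summary above follow directly from the reported TPS figures for patterns 1-4:

```python
# Recomputing the comparison ratios from the TPS numbers reported
# for the four test patterns.
tps = {
    "v36_required": 241.800000,   # pattern 1
    "v37_required": 359.017519,   # pattern 2
    "v37_disabled": 987.372220,   # pattern 3
    "no_2pc_patch": 955.984574,   # pattern 4
}

speedup_v37_vs_v36 = tps["v37_required"] / tps["v36_required"]
required_vs_disabled = tps["v37_required"] / tps["v37_disabled"]
disabled_vs_unpatched = tps["v37_disabled"] / tps["no_2pc_patch"]

print(f"v37 vs v36 (required):  {speedup_v37_vs_v36:.2f}x")   # prints 1.48x
print(f"required vs disabled:   {required_vs_disabled:.0%}")  # prints 36%
print(f"disabled vs unpatched:  {disabled_vs_unpatched:.1%}") # prints 103.3%
```

The three printed values match the prose summary: roughly a 1.5x improvement over v36, about 36% of the `disabled` throughput, and about a 3% difference against the unpatched build.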
{
"msg_contents": "Hi,\n\n\nI'm interested in this patch and I also ran the same test with Ikeda-san's fxact_update.pgbench.\nIn my environment (poor spec VM), the result is the following.\n\n* foreign_twophase_commit = disabled\n363tps\n\n* foreign_twophase_commit = required (It is necessary to set -R ${RATE} as Ikeda-san said)\n13tps\n\n\nI analyzed the bottleneck using pstack and strace.\nI noticed that the open() during the \"COMMIT PREPARED\" command is very slow.\n\nIn my environment the latency of \"COMMIT PREPARED\" is 16ms.\n(On the other hand, the latency of \"COMMIT\" and \"PREPARE TRANSACTION\" is 1ms.)\nIn the \"COMMIT PREPARED\" command, open() for the wal segment file takes 14ms.\nTherefore, open() is the bottleneck of \"COMMIT PREPARED\".\nFurthermore, I noticed that the backend process almost always opens the same wal segment file.\n\nIn the current patch, the backend process on the foreign server which is associated with the connection from the resolver process always runs the \"COMMIT PREPARED\" command.\nTherefore, the wal segment file of the current \"COMMIT PREPARED\" command is probably the same as that of the previous \"COMMIT PREPARED\" command.\n\nIn order to improve the performance of the resolver process, I think it is useful to skip closing the wal segment file during \"COMMIT PREPARED\" and reuse the file descriptor.\nIs it possible?\n\n\nRegards,\nRyohei Takahashi\n\n\n",
"msg_date": "Tue, 6 Jul 2021 06:15:44 +0000",
"msg_from": "\"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
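Takahashi-san's descriptor-reuse idea can be sketched as follows. This is only a conceptual illustration in Python with invented names, not PostgreSQL's actual WAL-file handling; the point is simply that consecutive reads of the same segment pay the open() cost once:

```python
import os
import tempfile

# Cache the descriptor of the most recently used segment across calls
# instead of re-open()ing it for every "COMMIT PREPARED".

class SegmentReader:
    def __init__(self):
        self._seg, self._fd = None, None
        self.opens = 0

    def read_at(self, seg_path, offset, nbytes):
        if seg_path != self._seg:        # only re-open on a segment switch
            if self._fd is not None:
                os.close(self._fd)
            self._fd = os.open(seg_path, os.O_RDONLY)
            self._seg, self.opens = seg_path, self.opens + 1
        os.lseek(self._fd, offset, os.SEEK_SET)
        return os.read(self._fd, nbytes)

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"0123456789abcdef")
    seg = f.name

r = SegmentReader()
for off in (0, 4, 8):                    # three reads hit the same segment
    r.read_at(seg, off, 4)
print(r.opens)                           # the open() cost is paid once
os.close(r._fd)
os.unlink(seg)
```

With a 14 ms open() dominating a 16 ms "COMMIT PREPARED", amortizing the open across successive commands on the same segment is exactly where the suggested win would come from.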
{
"msg_contents": "On Wed, June 30, 2021 10:06 (GMT+9), Masahiko Sawada wrote:\r\n> I've attached the new version patch that incorporates the comments from\r\n> Fujii-san and Ikeda-san I got so far. We launch a resolver process per foreign\r\n> server, committing prepared foreign transactions on foreign servers in parallel.\r\n\r\nHi Sawada-san,\r\nThank you for the latest set of patches.\r\nI've noticed from cfbot that the regression test failed, and I also could not compile it.\r\n\r\n============== running regression test queries ==============\r\ntest test_fdwxact ... FAILED 21 ms\r\n============== shutting down postmaster ==============\r\n======================\r\n 1 of 1 tests failed. \r\n======================\r\n\r\n> To get a better performance based on the current architecture, we can have\r\n> multiple resolver processes per foreign server but it seems not easy to tune it\r\n> in practice. Perhaps is it better if we simply have a pool of resolver processes\r\n> and we assign a resolver process to the resolution of one distributed\r\n> transaction one by one? That way, we need to launch resolver processes as\r\n> many as the concurrent backends using 2PC.\r\n\r\nYes, finding the right values to tune for max_foreign_prepared_transactions and\r\nmax_prepared_transactions seems difficult. If we set the number of resolver\r\nprocesses to the number of concurrent backends using 2PC, how do we determine\r\nthe value of max_foreign_transaction_resolvers? It might be good to collect some\r\nstatistics to judge the value, then we can compare the performance with the V37\r\nversion.\r\n\r\n-\r\nAlso, this is a bit of a side topic, and I know we've been discussing how to\r\nimprove/fix the resolver process bottlenecks, and Takahashi-san provided\r\nthe details in the thread above where V37 has problems. (I am joining the testing too.)\r\n\r\nI am not sure if this has been brought up before because of the length of this\r\nthread. But I think we need to consider preventing the resolver\r\nprocess from entering an infinite retry loop while resolving a prepared foreign\r\ntransaction. Currently, when a crashed foreign server is recovered during\r\nresolution retries, the information is recovered from WAL and files,\r\nand the resolver process resumes the foreign transaction resolution.\r\nHowever, what if we cannot (or intentionally do not want to) recover the\r\ncrashed server for a long time?\r\n\r\nAn idea is to make the resolver process stop automatically after some\r\nmaximum number of retries.\r\nWe could call the parameter foreign_transaction_resolution_max_retry_count.\r\nThere may be a better name, but I followed the pattern from your patch.\r\n\r\nThe server downtime can be estimated considering the proposed parameter\r\nforeign_transaction_resolution_retry_interval (default 10s) from the\r\npatch set.\r\nIn addition, according to the docs, \"a foreign server using the postgres_fdw\r\nforeign data wrapper can have the same options that libpq accepts in\r\nconnection strings\", so the connect_timeout set during CREATE SERVER can\r\nalso affect it.\r\n\r\nExample:\r\n CREATE SERVER's connect_timeout setting = 5s\r\n foreign_transaction_resolution_retry_interval = 10s\r\n foreign_transaction_resolution_max_retry_count = 3\r\n\r\n Estimated total time before resolver stops: \r\n = (5s) * (3 + 1) + (10s) * (3) = 50 s\r\n\r\n00s: 1st connect start\r\n05s: 1st connect timeout\r\n(retry interval)\r\n15s: 2nd connect start (1st retry)\r\n20s: 2nd connect timeout\r\n(retry interval)\r\n30s: 3rd connect start (2nd retry)\r\n35s: 3rd connect timeout\r\n(retry interval)\r\n45s: 4th connect start (3rd retry)\r\n50s: 4th connect timeout\r\n(resolver process stops)\r\n\r\nThen the resolver process will not wait indefinitely and will stop after\r\nsome time depending on the settings of the above parameters.\r\nThis could be an automatic counterpart of pg_stop_foreign_xact_resolver.\r\nAssuming the resolver has stopped, then once the crashed server is\r\nrestored, the user can execute pg_resolve_foreign_xact() manually.\r\nDo you think the idea is feasible and we can add it as part of the patch sets?\r\n\r\nRegards,\r\nKirk Jamison\r\n",
"msg_date": "Thu, 8 Jul 2021 03:27:55 +0000",
"msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
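The downtime estimate in the proposal above can be computed mechanically. This sketch just reproduces the arithmetic of the example (the parameter names follow the proposed GUCs, which do not exist yet):

```python
# Time until the resolver gives up: the initial connect attempt plus
# max_retries retries, each attempt costing connect_timeout, with
# retry_interval between attempts.

def resolver_give_up_time(connect_timeout, retry_interval, max_retries):
    attempts = max_retries + 1
    return connect_timeout * attempts + retry_interval * max_retries

total = resolver_give_up_time(connect_timeout=5, retry_interval=10,
                              max_retries=3)
print(f"resolver stops after {total} s")   # 5*(3+1) + 10*3 = 50 s

# Start times of the four connect attempts, matching the timeline
# listed in the mail (00s, 15s, 30s, 45s):
starts = [i * (5 + 10) for i in range(4)]
print(starts)                              # [0, 15, 30, 45]
```

This confirms the 50-second figure in the example, and makes it easy to see how each parameter shifts the point at which the resolver would stop.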
{
"msg_contents": "\n\nOn 2021/06/30 10:05, Masahiko Sawada wrote:\n> I've attached the new version patch that incorporates the comments\n> from Fujii-san and Ikeda-san I got so far.\n\nThanks for updating the patches!\n\nI'm now reading the 0001 and 0002 patches and wondering if we can commit them\nfirst because they just provide an independent basic mechanism for\nforeign transaction management.\n\nOne question regarding them is: why did we add the new API only for the \"top\" foreign\ntransaction? Even with those patches, the old API (CallSubXactCallbacks) is still\nbeing used for foreign subtransactions and xact_depth is still being managed\nin the postgres_fdw layer (not PostgreSQL core). Is this intentional?\nSorry if this was already discussed before.\n\nAs far as I read the code, keeping the old API for foreign subtransactions doesn't\ncause any actual bug. But it's just strange and half-baked to manage top and\nsub transactions in different layers and to use the old and new APIs for them.\n\nOTOH, I'm afraid that adding a new (non-essential) API for foreign subtransactions\nmight increase the code complexity unnecessarily.\n\nThoughts?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 9 Jul 2021 15:26:11 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "Sorry for the late reply.\n\nOn Mon, Jul 5, 2021 at 3:29 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/06/30 10:05, Masahiko Sawada wrote:\n> > On Fri, Jun 25, 2021 at 9:53 AM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n> >>\n> >> Hi Jamison-san, sawada-san,\n> >>\n> >> Thanks for testing!\n> >>\n> >> FWIF, I tested using pgbench with \"--rate=\" option to know the server\n> >> can execute transactions with stable throughput. As sawada-san said,\n> >> the latest patch resolved second phase of 2PC asynchronously. So,\n> >> it's difficult to control the stable throughput without \"--rate=\" option.\n> >>\n> >> I also worried what I should do when the error happened because to increase\n> >> \"max_prepared_foreign_transaction\" doesn't work. Since too overloading may\n> >> show the error, is it better to add the case to the HINT message?\n> >>\n> >> BTW, if sawada-san already develop to run the resolver processes in parallel,\n> >> why don't you measure performance improvement? Although Robert-san,\n> >> Tunakawa-san and so on are discussing what architecture is best, one\n> >> discussion point is that there is a performance risk if adopting asynchronous\n> >> approach. If we have promising solutions, I think we can make the discussion\n> >> forward.\n> >\n> > Yeah, if we can asynchronously resolve the distributed transactions\n> > without worrying about max_prepared_foreign_transaction error, it\n> > would be good. But we will need synchronous resolution at some point.\n> > I think we at least need to discuss it at this point.\n> >\n> > I've attached the new version patch that incorporates the comments\n> > from Fujii-san and Ikeda-san I got so far. We launch a resolver\n> > process per foreign server, committing prepared foreign transactions\n> > on foreign servers in parallel. 
To get a better performance based on\n> > the current architecture, we can have multiple resolver processes per\n> > foreign server but it seems not easy to tune it in practice. Perhaps\n> > is it better if we simply have a pool of resolver processes and we\n> > assign a resolver process to the resolution of one distributed\n> > transaction one by one? That way, we need to launch resolver processes\n> > as many as the concurrent backends using 2PC.\n>\n> Thanks for updating the patches.\n>\n> I have tested in my local laptop and summary is the following.\n\nThank you for testing!\n\n>\n> (1) The latest patch(v37) can improve throughput by 1.5 times compared to v36.\n>\n> Although I expected it improves by 2.0 times because the workload is that one\n> transaction access two remote servers... I think the reason is that the disk\n> is bottleneck and I couldn't prepare disks for each postgresql servers. If I\n> could, I think the performance can be improved by 2.0 times.\n>\n>\n> (2) The latest patch(v37) throughput of foreign_twophase_commit = required is\n> about 36% compared to the case if foreign_twophase_commit = disabled.\n>\n> Although the throughput is improved, the absolute performance is not good. It\n> may be the fate of 2PC. I think the reason is that the number of WAL writes is\n> much increase and, the disk writes in my laptop is the bottleneck. I want to\n> know the result testing in richer environments if someone can do so.\n>\n>\n> (3) The latest patch(v37) has no overhead if foreign_twophase_commit =\n> disabled. On the contrary, the performance improved by 3%. 
It may be within\n> the margin of error.\n>\n>\n>\n> The test detail is following.\n>\n> # condition\n>\n> * 1 coordinator and 3 foreign servers\n>\n> * 4 instance shared one ssd disk.\n>\n> * one transaction queries different two foreign servers.\n>\n> ``` fxact_update.pgbench\n> \\set id random(1, 1000000)\n>\n> \\set partnum 3\n> \\set p1 random(1, :partnum)\n> \\set p2 ((:p1 + 1) % :partnum) + 1\n>\n> BEGIN;\n> UPDATE part:p1 SET md5 = md5(clock_timestamp()::text) WHERE id = :id;\n> UPDATE part:p2 SET md5 = md5(clock_timestamp()::text) WHERE id = :id;\n> COMMIT;\n> ```\n>\n> * pgbench generates load. I increased ${RATE} little by little until \"maximum\n> number of foreign transactions reached\" error happens.\n>\n> ```\n> pgbench -f fxact_update.pgbench -R ${RATE} -c 8 -j 8 -T 180\n> ```\n>\n> * parameters\n> max_prepared_transactions = 100\n> max_prepared_foreign_transactions = 200\n> max_foreign_transaction_resolvers = 4\n>\n>\n> # test source code patterns\n>\n> 1. 2pc patches(v36) based on 6d0eb385 (foreign_twophase_commit = required).\n> 2. 2pc patches(v37) based on 2595e039 (foreign_twophase_commit = required).\n> 3. 2pc patches(v37) based on 2595e039 (foreign_twophase_commit = disabled).\n> 4. 2595e039 without 2pc patches(v37).\n>\n>\n> # results\n>\n> 1. tps = 241.8000TPS\n> latency average = 10.413ms\n>\n> 2. tps = 359.017519 ( by 1.5 times compared to 1. by 0.36% compared to 3.)\n> latency average = 15.427ms\n>\n> 3. tps = 987.372220 ( by 1.03% compared to 4. )\n> latency average = 8.102ms\n>\n> 4. tps = 955.984574\n> latency average = 8.368ms\n>\n> The disk is the bottleneck in my environment because disk util is almost 100%\n> in every pattern. If disks for each instance can be prepared, I think we can\n> expect more performance improvements.\n\nIt seems still not good performance. I'll also test using your script.\n\n>\n>\n> >> In my understanding, there are three improvement idea. 
First is that to make\n> >> the resolver processes run in parallel. Second is that to send \"COMMIT/ABORT\n> >> PREPARED\" remote servers in bulk. Third is to stop syncing the WAL\n> >> remove_fdwxact() after resolving is done, which I addressed in the mail sent\n> >> at June 3rd, 13:56. Since third idea is not yet discussed, there may\n> >> be my misunderstanding.\n> >\n> > Yes, those optimizations are promising. On the other hand, they could\n> > introduce complexity to the code and APIs. I'd like to keep the first\n> > version simple. I think we need to discuss them at this stage but can\n> > leave the implementation of both parallel execution and batch\n> > execution as future improvements.\n>\n> OK, I agree.\n>\n>\n> > For the third idea, I think the implementation was wrong; it removes\n> > the state file then flushes the WAL record. I think these should be\n> > performed in the reverse order. Otherwise, FdwXactState entry could be\n> > left on the standby if the server crashes between them. I might be\n> > missing something though.\n>\n> Oh, I see. I think you're right though what you wanted to say is that it\n> flushes the WAL records then removes the state file. If \"COMMIT/ABORT\n> PREPARED\" statements execute in bulk, it seems enough to sync the wal only\n> once, then remove all related state files.\n>\n>\n> BTW, I tested the binary building with -O2, and I got the following warnings.\n> It's needed to be fixed.\n>\n> ```\n> fdwxact.c: In function 'PrepareAllFdwXacts':\n> fdwxact.c:897:13: warning: 'flush_lsn' may be used uninitialized in this\n> function [-Wmaybe-uninitialized]\n> 897 | canceled = SyncRepWaitForLSN(flush_lsn, false);\n> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> ```\n\nThank you for the report. I'll fix it in the next version patch.\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 9 Jul 2021 20:36:30 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
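The partition arithmetic in Ikeda-san's pgbench script above is meant to make every transaction touch two different foreign servers. A quick sketch (Python stands in for the pgbench `\set` expressions; the function name is illustrative) verifies that the second partition never equals the first for partnum = 3:

```python
# Check that the pgbench partition arithmetic from the script above
# always picks two distinct partitions, so each transaction really does
# span two foreign servers.

def second_partition(p1: int, partnum: int = 3) -> int:
    # mirrors: \set p2 ((:p1 + 1) % :partnum) + 1
    return ((p1 + 1) % partnum) + 1

for p1 in range(1, 4):
    p2 = second_partition(p1)
    assert p1 != p2 and 1 <= p2 <= 3
    print(p1, p2)
# p1=1 -> p2=3, p1=2 -> p2=1, p1=3 -> p2=2
```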
{
"msg_contents": "Sorry for the late reply.\n\nOn Tue, Jul 6, 2021 at 3:15 PM r.takahashi_2@fujitsu.com\n<r.takahashi_2@fujitsu.com> wrote:\n>\n> Hi,\n>\n>\n> I'm interested in this patch and I also run the same test with Ikeda-san's fxact_update.pgbench.\n\nThank you for testing!\n\n> In my environment (poor spec VM), the result is following.\n>\n> * foreign_twophase_commit = disabled\n> 363tps\n>\n> * foreign_twophase_commit = required (It is necessary to set -R ${RATE} as Ikeda-san said)\n> 13tps\n>\n>\n> I analyzed the bottleneck using pstack and strace.\n> I noticed that the open() during \"COMMIT PREPARED\" command is very slow.\n>\n> In my environment the latency of the \"COMMIT PREPARED\" is 16ms.\n> (On the other hand, the latency of \"COMMIT\" and \"PREPARE TRANSACTION\" is 1ms)\n> In the \"COMMIT PREPARED\" command, open() for wal segment file takes 14ms.\n> Therefore, open() is the bottleneck of \"COMMIT PREPARED\".\n> Furthermore, I noticed that the backend process almost always open the same wal segment file.\n>\n> In the current patch, the backend process on foreign server which is associated with the connection from the resolver process always run \"COMMIT PREPARED\" command.\n> Therefore, the wal segment file of the current \"COMMIT PREPARED\" command probably be the same with the previous \"COMMIT PREPARED\" command.\n>\n> In order to improve the performance of the resolver process, I think it is useful to skip closing wal segment file during the \"COMMIT PREPARED\" and reuse file descriptor.\n> Is it possible?\n\nNot sure but it might be possible to keep holding an xlogreader for\nreading PREPARE WAL records even after the transaction commit. But I\nwonder how much open() for wal segment file accounts for the total\nexecution time of 2PC. 2PC requires 2 network round trips for each\nparticipant. 
For example, if it took 500ms in total, we would not get\nmuch benefit from the point of view of 2PC performance even if we\nimproved it from 14ms to 1ms.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 9 Jul 2021 20:38:36 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Fri, Jul 9, 2021 at 3:26 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/06/30 10:05, Masahiko Sawada wrote:\n> > I've attached the new version patch that incorporates the comments\n> > from Fujii-san and Ikeda-san I got so far.\n>\n> Thanks for updating the patches!\n>\n> I'm now reading 0001 and 0002 patches and wondering if we can commit them\n> at first because they just provide independent basic mechanism for\n> foreign transaction management.\n>\n> One question regarding them is; Why did we add new API only for \"top\" foreign\n> transaction? Even with those patches, old API (CallSubXactCallbacks) is still\n> being used for foreign subtransaction and xact_depth is still being managed\n> in postgres_fdw layer (not PostgreSQL core). Is this intentional?\n\nYes, it's not needed for 2PC support and I was also concerned to add\ncomplexity to the core by adding new API for subscriptions that are\nnot necessarily necessary for 2PC.\n\n> As far as I read the code, keep using old API for foreign subtransaction doesn't\n> cause any actual bug. But it's just strange and half-baked to manage top and\n> sub transaction in the differenet layer and to use old and new API for them.\n\nThat's a valid concern. I'm really not sure what we should do here but\nI guess that even if we want to support subscriptions we have another\nAPI dedicated for subtransaction commit and rollback.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 9 Jul 2021 22:44:54 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "Hi Sawada-san,\n\n\nThank you for your reply.\n\n> Not sure but it might be possible to keep holding an xlogreader for\n> reading PREPARE WAL records even after the transaction commit. But I\n> wonder how much open() for wal segment file accounts for the total\n> execution time of 2PC. 2PC requires 2 network round trips for each\n> participant. For example, if it took 500ms in total, we would not get\n> benefits much from the point of view of 2PC performance even if we\n> improved it from 14ms to 1ms.\n\nI made the patch based on your advice and re-ran the test on the new machine.\n(The attached patch is just for testing purposes.)\n\n\n* foreign_twophase_commit = disabled\n2686tps\n\n* foreign_twophase_commit = required (It is necessary to set -R ${RATE} as Ikeda-san said)\n311tps\n\n* foreign_twophase_commit = required with attached patch (It is not necessary to set -R ${RATE})\n2057tps\n\n\nThis indicates that if we can reduce the number of times we open() the wal segment file during \"COMMIT PREPARED\", the performance can be improved.\n\nThis patch can skip closing the wal segment file, but I don't know when we should close it.\nOne idea is to close it when the wal segment file is recycled, but that seems difficult for the backend process to do.\n\nBTW, in a previous discussion, \"Send COMMIT PREPARED to remote servers in bulk\" was proposed.\nI imagined a new SQL interface like \"COMMIT PREPARED 'prep_1', 'prep_2', ... 'prep_n'\".\nIf we can keep the wal segment file open during a bulk COMMIT PREPARED, we can not only reduce the number of communications, but also reduce the number of open() calls on the wal segment file.\n\n\nRegards,\nRyohei Takahashi",
"msg_date": "Tue, 13 Jul 2021 04:13:58 +0000",
"msg_from": "\"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
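The descriptor-reuse idea being tested here can be illustrated with a generic sketch. This is plain Python, not postgres_fdw or XLogRead code; the class name and the raw `os.open`/`os.read` calls are stand-ins for the real WAL-segment handling, and the point is only the caching pattern: re-open the file only when the target segment changes.

```python
import os
import tempfile

class SegmentReader:
    """Illustrative fd cache: keep the last segment open instead of
    re-opening it for every read, mimicking the reuse idea above."""

    def __init__(self):
        self._path = None
        self._fd = None

    def read_at(self, path, offset, length):
        # Consecutive COMMIT PREPARED calls usually hit the same WAL
        # segment, so the expensive open() is skipped most of the time.
        if path != self._path:
            if self._fd is not None:
                os.close(self._fd)
            self._fd = os.open(path, os.O_RDONLY)
            self._path = path
        os.lseek(self._fd, offset, os.SEEK_SET)
        return os.read(self._fd, length)

# Quick check with a throwaway file standing in for a WAL segment.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"0123456789")
    seg = f.name
r = SegmentReader()
print(r.read_at(seg, 2, 3))  # -> b'234' (first call opens the file)
print(r.read_at(seg, 5, 2))  # -> b'56'  (second call reuses the fd)
```

The open question Takahashi-san raises, when to close a descriptor whose segment has been recycled, is exactly what this naive cache does not answer.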
{
"msg_contents": "On Tue, Jul 13, 2021 at 1:14 AM r.takahashi_2@fujitsu.com <\nr.takahashi_2@fujitsu.com> wrote:\n\n> Hi Sawada-san,\n>\n>\n> Thank you for your reply.\n>\n> > Not sure but it might be possible to keep holding an xlogreader for\n> > reading PREPARE WAL records even after the transaction commit. But I\n> > wonder how much open() for wal segment file accounts for the total\n> > execution time of 2PC. 2PC requires 2 network round trips for each\n> > participant. For example, if it took 500ms in total, we would not get\n> > benefits much from the point of view of 2PC performance even if we\n> > improved it from 14ms to 1ms.\n>\n> I made the patch based on your advice and re-run the test on the new\n> machine.\n> (The attached patch is just for test purpose.)\n>\nWouldn't it be better to explicitly initialize the pointer with NULL?\nI think it's common in Postgres.\n\nstatic XLogReaderState *xlogreader = NULL;\n\n\n>\n> * foreign_twophase_commit = disabled\n> 2686tps\n>\n> * foreign_twophase_commit = required (It is necessary to set -R ${RATE} as\n> Ikeda-san said)\n> 311tps\n>\n> * foreign_twophase_commit = required with attached patch (It is not\n> necessary to set -R ${RATE})\n> 2057tps\n>\nNice results.\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 13 Jul 2021 08:24:55 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "Hi,\r\n\r\n\r\n> Wouldn't it be better to explicitly initialize the pointer with NULL?\r\n\r\nThank you for your advice.\r\nYou are correct.\r\n\r\nAnyway, I fixed it and re-ran the performance test; it of course does not affect tps.\r\n\r\nRegards,\r\nRyohei Takahashi\r\n",
"msg_date": "Tue, 13 Jul 2021 21:34:19 +0000",
"msg_from": "\"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "On Tue, Jul 13, 2021 at 1:14 PM r.takahashi_2@fujitsu.com\n<r.takahashi_2@fujitsu.com> wrote:\n>\n> Hi Sawada-san,\n>\n>\n> Thank you for your reply.\n>\n> > Not sure but it might be possible to keep holding an xlogreader for\n> > reading PREPARE WAL records even after the transaction commit. But I\n> > wonder how much open() for wal segment file accounts for the total\n> > execution time of 2PC. 2PC requires 2 network round trips for each\n> > participant. For example, if it took 500ms in total, we would not get\n> > benefits much from the point of view of 2PC performance even if we\n> > improved it from 14ms to 1ms.\n>\n> I made the patch based on your advice and re-run the test on the new machine.\n\nThank you for testing!\n\n>\n>\n> * foreign_twophase_commit = disabled\n> 2686tps\n>\n> * foreign_twophase_commit = required (It is necessary to set -R ${RATE} as Ikeda-san said)\n> 311tps\n>\n> * foreign_twophase_commit = required with attached patch (It is not necessary to set -R ${RATE})\n> 2057tps\n\nNice improvement!\n\nBTW, did you run the test locally? That is, are the foreign servers\nlocated on the same machine?\n\n>\n>\n> This indicate that if we can reduce the number of times to open() wal segment file during \"COMMIT PREPARED\", the performance can be improved.\n>\n> This patch can skip closing wal segment file, but I don't know when we should close.\n> One idea is to close when the wal segment file is recycled, but it seems difficult for backend process to do so.\n\nI guess it would be better to start a new thread for this improvement.\nThis idea helps not only the 2PC case but also improves the\nCOMMIT/ROLLBACK PREPARED performance itself.
Rather than treating it as tied to this patch, I think it's good if we can discuss that patch\nseparately and it gets committed on its own.\n\n> BTW, in previous discussion, \"Send COMMIT PREPARED remote servers in bulk\" is proposed.\n> I imagined the new SQL interface like \"COMMIT PREPARED 'prep_1', 'prep_2', ... 'prep_n'\".\n> If we can open wal segment file during bulk COMMIT PREPARED, we can not only reduce the times of communication, but also reduce the times of open() wal segment file.\n\nWhat if we successfully committed 'prep_1' but an error happened\nwhile committing another one for some reason (e.g., corrupted 2PC\nstate file, OOM, etc.)? We might return an error to the client but have\nalready committed 'prep_1'.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 14 Jul 2021 16:52:41 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "Hi Sawada-san,\n\n\nThank you for your reply.\n\n> BTW did you test on the local? That is, the foreign servers are\n> located on the same machine?\n\nYes, I tested locally since I cannot prepare a good network environment now.\n\n\n> I guess it would be better to start a new thread for this improvement.\n\nThank you for your advice.\nI started a new thread [1].\n\n\n> What if we successfully committed 'prep_1' but an error happened\n> during committing another one for some reason (i.g., corrupted 2PC\n> state file, OOM etc)? We might return an error to the client but have\n> already committed 'prep_1'.\n\nSorry, I don't have a good idea now.\nI imagined that the command would return the list of transaction ids that ended with an error.\n\n\n[1]\nhttps://www.postgresql.org/message-id/OS0PR01MB56828019B25CD5190AB6093282129%40OS0PR01MB5682.jpnprd01.prod.outlook.com\n\n\nRegards,\nRyohei Takahashi\n\n\n",
"msg_date": "Thu, 15 Jul 2021 09:25:16 +0000",
"msg_from": "\"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
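The failure-reporting idea Takahashi-san imagines here (a bulk command that reports which transaction ids failed) can be sketched generically. This is illustrative Python, not a proposed server implementation: `commit_one` is a hypothetical stand-in for issuing COMMIT PREPARED for a single gid, and the point is only the per-gid error collection so the client learns which transactions were already committed.

```python
def bulk_commit_prepared(gids, commit_one):
    """Try to commit each prepared transaction in turn; return
    (done, failed), where failed carries (gid, error) pairs."""
    done, failed = [], []
    for gid in gids:
        try:
            commit_one(gid)       # stand-in for COMMIT PREPARED '<gid>'
            done.append(gid)
        except Exception as e:    # e.g. corrupted 2PC state file, OOM
            failed.append((gid, str(e)))
    return done, failed

# Example: the second commit fails partway through the batch.
def commit_one(gid):
    if gid == "prep_2":
        raise RuntimeError("corrupted 2PC state file")

done, failed = bulk_commit_prepared(["prep_1", "prep_2", "prep_3"], commit_one)
print(done)    # -> ['prep_1', 'prep_3']
print(failed)  # -> [('prep_2', 'corrupted 2PC state file')]
```

This mirrors Sawada-san's concern upthread: 'prep_1' is already durably committed by the time the error surfaces, so the command cannot simply fail as a whole.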
{
"msg_contents": "\n\nOn 2021/07/09 22:44, Masahiko Sawada wrote:\n> On Fri, Jul 9, 2021 at 3:26 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> As far as I read the code, keep using old API for foreign subtransaction doesn't\n>> cause any actual bug. But it's just strange and half-baked to manage top and\n>> sub transaction in the differenet layer and to use old and new API for them.\n> \n> That's a valid concern. I'm really not sure what we should do here but\n> I guess that even if we want to support subscriptions we have another\n> API dedicated for subtransaction commit and rollback.\nOK, so if possible I will write a POC patch for the new API for foreign subtransactions\nand consider whether it's simple enough that we can commit it into core or not.\n\n\n+#define FDWXACT_FLAG_PARALLEL_WORKER\t0x02\t/* is parallel worker? */\n\nDoes this imply that parallel workers may execute PREPARE TRANSACTION and\nCOMMIT/ROLLBACK PREPARED on the foreign server for atomic commit?\nIf so, what happens if the PREPARE TRANSACTION that one of the\nparallel workers issues fails? In this case, not only that parallel worker\nbut also the other parallel workers and the leader should roll back the transaction.\nThat is, they should issue ROLLBACK PREPARED to the foreign servers.\nWas this issue already handled and addressed in the patches?\n\nThis seems not to be an actual issue if only postgres_fdw is used, because postgres_fdw\ndoesn't have the IsForeignScanParallelSafe API. Right? But what about other FDWs?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 16 Jul 2021 00:38:27 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "Hi Sawada-san,\r\n\r\nI noticed that this thread and its set of patches have been marked with \"Returned with Feedback\" by yourself.\r\nI find the feature (atomic commit for foreign transactions) very useful,\r\nand it will pave the way for distributed transaction management in Postgres.\r\nAlthough we have not arrived at a consensus on which approach is best,\r\nthere were significant reviews and major patch changes in the past 2 years.\r\nBy any chance, do you have any plans to continue this from where you left off?\r\n\r\nRegards,\r\nKirk Jamison\r\n\r\n",
"msg_date": "Tue, 5 Oct 2021 00:55:59 +0000",
"msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "Hi,\n\nOn Tue, Oct 5, 2021 at 9:56 AM k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n>\n> Hi Sawada-san,\n>\n> I noticed that this thread and its set of patches have been marked with \"Returned with Feedback\" by yourself.\n> I find the feature (atomic commit for foreign transactions) very useful\n> and it will pave the road for having a distributed transaction management in Postgres.\n> Although we have not arrived at consensus at which approach is best,\n> there were significant reviews and major patch changes in the past 2 years.\n> By any chance, do you have any plans to continue this from where you left off?\n\nAs I could not reply to the review comments from Fujii-san for almost\nthree months, I don't have enough time to move this project forward at\nleast for now. That's why I marked this patch as RWF. I’d like to\ncontinue working on this project in my spare time but I know this is\nnot a project that can be completed by using only my spare time. If\nsomeone wants to work on this project, I’d appreciate it and am happy\nto help.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 5 Oct 2021 10:38:57 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "\n\nOn 2021/10/05 10:38, Masahiko Sawada wrote:\n> Hi,\n> \n> On Tue, Oct 5, 2021 at 9:56 AM k.jamison@fujitsu.com\n> <k.jamison@fujitsu.com> wrote:\n>>\n>> Hi Sawada-san,\n>>\n>> I noticed that this thread and its set of patches have been marked with \"Returned with Feedback\" by yourself.\n>> I find the feature (atomic commit for foreign transactions) very useful\n>> and it will pave the road for having a distributed transaction management in Postgres.\n>> Although we have not arrived at consensus at which approach is best,\n>> there were significant reviews and major patch changes in the past 2 years.\n>> By any chance, do you have any plans to continue this from where you left off?\n> \n> As I could not reply to the review comments from Fujii-san for almost\n> three months, I don't have enough time to move this project forward at\n> least for now. That's why I marked this patch as RWF. I’d like to\n> continue working on this project in my spare time but I know this is\n> not a project that can be completed by using only my spare time. If\n> someone wants to work on this project, I’d appreciate it and am happy\n> to help.\n\nProbably it's time to rethink the approach. The patch introduces a\nforeign transaction manager into PostgreSQL core, but as far as\nI have reviewed the patch, its changes look like overkill and too complicated.\nThis seems to be one of the reasons why we have not yet been able to commit\nthe feature even after several years.\n\nAnother concern about the approach of the patch is that it needs\nto change the backend so that it additionally waits for replication\nduring the commit phase before executing PREPARE TRANSACTION\non foreign servers, which would further decrease performance\nduring the commit phase.\n\nSo I wonder if it's worth revisiting the original approach, i.e.,\nadding the atomic commit into postgres_fdw.
One disadvantage of\nthis is that it supports atomic commit only between foreign\nPostgreSQL servers, not with other data resources like MySQL.\nBut I'm not sure if we really want to do atomic commit between\nvarious FDWs. Maybe supporting only postgres_fdw is enough\nfor most users. Thoughts?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 6 Oct 2021 12:03:49 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "Hi Fujii-san and Sawada-san,\r\n\r\nThank you very much for your replies.\r\n\r\n> >> I noticed that this thread and its set of patches have been marked with\r\n> \"Returned with Feedback\" by yourself.\r\n> >> I find the feature (atomic commit for foreign transactions) very\r\n> >> useful and it will pave the road for having a distributed transaction\r\n> management in Postgres.\r\n> >> Although we have not arrived at consensus at which approach is best,\r\n> >> there were significant reviews and major patch changes in the past 2 years.\r\n> >> By any chance, do you have any plans to continue this from where you left off?\r\n> >\r\n> > As I could not reply to the review comments from Fujii-san for almost\r\n> > three months, I don't have enough time to move this project forward at\r\n> > least for now. That's why I marked this patch as RWF. I’d like to\r\n> > continue working on this project in my spare time but I know this is\r\n> > not a project that can be completed by using only my spare time. If\r\n> > someone wants to work on this project, I’d appreciate it and am happy\r\n> > to help.\r\n> \r\n> Probably it's time to rethink the approach. The patch introduces foreign\r\n> transaction manager into PostgreSQL core, but as far as I review the patch, its\r\n> changes look overkill and too complicated.\r\n> This seems one of reasons why we could not have yet committed the feature even\r\n> after several years.\r\n> \r\n> Another concern about the approach of the patch is that it needs to change a\r\n> backend so that it additionally waits for replication during commit phase before\r\n> executing PREPARE TRANSACTION to foreign servers. Which would decrease the\r\n> performance during commit phase furthermore.\r\n> \r\n> So I wonder if it's worth revisiting the original approach, i.e., add the atomic\r\n> commit into postgres_fdw. 
One disadvantage of this is that it supports atomic\r\n> commit only between foreign PostgreSQL servers, not other various data\r\n> resources like MySQL.\r\n> But I'm not sure if we really want to do atomic commit between various FDWs.\r\n> Maybe supporting only postgres_fdw is enough for most users. Thought?\r\n\r\nThe intention of Sawada-san's patch is grand, but it would be very helpful\r\nbecause it accommodates possible future support of atomic commit for\r\nvarious types of FDWs. However, it's difficult to get everyone's agreement,\r\nas other reviewers also point out the commit performance issue. Another point is\r\nhow it should work when we also implement atomic visibility (which is another\r\ntopic for distributed transactions but worth considering).\r\nThat said, if we're going to initially support it on postgres_fdw, which is simpler\r\nthan the latest patches, we need to ensure that abnormalities and errors\r\nare properly handled and prove that commit performance can be improved,\r\ne.g. if we can commit not serially but in parallel.\r\nAnd if possible, although not necessary during the first step, it may put the other\r\nreviewers at ease if we can also sketch how to implement atomic\r\nvisibility on postgres_fdw.\r\nThoughts?\r\n\r\nRegards,\r\nKirk Jamison\r\n",
"msg_date": "Thu, 7 Oct 2021 04:28:57 +0000",
"msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "Hi,\n\nOn Thu, Oct 7, 2021 at 1:29 PM k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n> That said, if we're going to initially support it on postgres_fdw, which is simpler\n> than the latest patches, we need to ensure that abnormalities and errors\n> are properly handled and prove that commit performance can be improved,\n> e.g. if we can commit not in serial but also possible in parallel.\n\nIf it's ok with you, I'd like to work on the performance issue. What\nI have in mind is to commit all remote transactions in parallel instead\nof sequentially in the postgres_fdw transaction callback, as mentioned\nabove, but I think that would improve the performance even for\nthe one-phase commit that we already have. Maybe I'm missing something,\nthough.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 7 Oct 2021 19:47:44 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
{
"msg_contents": "\n\nOn 2021/10/07 19:47, Etsuro Fujita wrote:\n> Hi,\n> \n> On Thu, Oct 7, 2021 at 1:29 PM k.jamison@fujitsu.com\n> <k.jamison@fujitsu.com> wrote:\n>> That said, if we're going to initially support it on postgres_fdw, which is simpler\n>> than the latest patches, we need to ensure that abnormalities and errors\n>> are properly handled\n\nYes. One idea for this is to include the information required to resolve\noutstanding prepared transactions in the transaction identifier that\nthe PREPARE TRANSACTION command uses. For example, we can use the XID of\nthe local transaction and the cluster ID of the local server (e.g., the cluster_name\nthat users specify uniquely can be used for that) as that information.\nIf the cluster_name of the local server is \"server1\" and its XID is now 9999,\npostgres_fdw issues \"PREPARE TRANSACTION 'server1_9999'\" and\n\"COMMIT PREPARED 'server1_9999'\" to the foreign servers, to end those\nforeign transactions in a two-phase way.\n\nIf some trouble happens, the prepared transaction with \"server1_9999\"\nmay remain unexpectedly in one foreign server. In this case we can\ndetermine whether to commit or roll back that outstanding transaction\nby checking whether the past transaction with XID 9999 was committed\nor rolled back on the server \"server1\". If it was committed, the prepared\ntransaction should also be committed, so we should execute\n\"COMMIT PREPARED 'server1_9999'\". If it was rolled back, the prepared\ntransaction should also be rolled back. If it's in progress, we should\ndo nothing for that transaction.\n\npg_xact_status() can be used to check whether the transaction with\nthe specified XID was committed or rolled back. But pg_xact_status()\ncan return an invalid result if CLOG data for the specified XID has been\ntruncated by VACUUM FREEZE.
To handle this case, we might need\nthe special table tracking the transaction status.\n\nDBA can use the above procedure and manually resolve the outstanding\nprepared transactions in foreign servers. Also probably we can implement\nthe function doing the procedure. If so, it might be good idea to make\nbackground worker or cron periodically execute the function.\n\n\n>> and prove that commit performance can be improved,\n>> e.g. if we can commit not in serial but also possible in parallel.\n> \n> If it's ok with you, I'd like to work on the performance issue. What\n> I have in mind is commit all remote transactions in parallel instead\n> of sequentially in the postgres_fdw transaction callback, as mentioned\n> above, but I think that would improve the performance even for\n> one-phase commit that we already have.\n\n+100\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 7 Oct 2021 23:37:20 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
},
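The resolution rule Fujii-san describes above can be sketched as follows. This is an illustrative sketch only: the `<cluster_name>_<xid>` GID scheme is the one proposed in the message (e.g. `server1_9999`), the function name is invented here, and the status strings are those documented for `pg_xact_status()` (`committed`, `aborted`, `in progress`).

```python
def resolve_orphaned_prepared_tx(gid, origin_xact_status):
    """Decide how to resolve a prepared transaction left behind on a
    foreign server, following the "<cluster_name>_<xid>" GID scheme
    sketched above (e.g. "server1_9999").

    origin_xact_status is what pg_xact_status() reports for the
    originating local XID: 'committed', 'aborted' or 'in progress'.
    Returns the SQL to run on the foreign server, or None if the local
    transaction is still in progress and nothing should be done yet.
    """
    _cluster, sep, xid = gid.rpartition("_")
    if not sep or not xid.isdigit():
        raise ValueError("GID does not follow the <cluster>_<xid> scheme: %r" % gid)
    if origin_xact_status == "committed":
        return "COMMIT PREPARED '%s'" % gid
    if origin_xact_status == "aborted":
        return "ROLLBACK PREPARED '%s'" % gid
    return None  # still in progress: leave the prepared transaction alone
```

A DBA (or the background worker suggested above) would run this decision per orphaned GID found in `pg_prepared_xacts` on each foreign server.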
{
"msg_contents": "Fujii-san,\n\nOn Thu, Oct 7, 2021 at 11:37 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2021/10/07 19:47, Etsuro Fujita wrote:\n> > On Thu, Oct 7, 2021 at 1:29 PM k.jamison@fujitsu.com\n> > <k.jamison@fujitsu.com> wrote:\n> >> and prove that commit performance can be improved,\n> >> e.g. if we can commit not in serial but also possible in parallel.\n> >\n> > If it's ok with you, I'd like to work on the performance issue. What\n> > I have in mind is commit all remote transactions in parallel instead\n> > of sequentially in the postgres_fdw transaction callback, as mentioned\n> > above, but I think that would improve the performance even for\n> > one-phase commit that we already have.\n>\n> +100\n\nI’ve started working on this. Once I have a (POC) patch, I’ll post it\nin a new thread, as I think it can be discussed separately.\n\nThanks!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Sun, 10 Oct 2021 17:12:48 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transactions involving multiple postgres foreign servers, take 2"
}
]
[
{
"msg_contents": "Hi!\n\nI've received a bug report from a PostgreSQL user that psql 12.1 failed to connect to a PostgreSQL 12.1 server, with the following error message:\n\npsql: error: could not connect to server: FATAL: unsupported frontend protocol 1234.5679: server supports 2.0 to 3.0\n\nAfter inspecting a TCP dump, I realised that libpq apparently sent a GSS startup packet, got 'N' (not supported) response, then tried a SSL startup packet, at which point the server sent an error.\n\nThe bug report is available at the following URL:\nhttps://github.com/PostgresApp/PostgresApp/issues/537\n\nAfter inspecting postmaster.c, it seems that postmaster only allows a single negotiation attempt, but libpq doesn't know that.\n\nI'm not familiar with GSS, but from my naive point of view it would seem that we should fix this issue as follows:\n\n1) On the server side, allow multiple negotiation attempts (eg. allow SSL negotiation after a rejected GSS negotiation attempt)\n\n2) On the client side, detect an error message after the second negotiation attempt, and reconnect, to ensure compatibility with servers that do not support multiple attempts yet.\n\nI've attached two proposed patches with these changes.\n\nBest regards,\nJakob",
"msg_date": "Fri, 6 Dec 2019 14:25:46 +0100",
"msg_from": "Jakob Egger <jakob@eggerapps.at>",
"msg_from_op": true,
"msg_subject": "Frontend/Backend Protocol: SSL / GSS Protocol Negotiation Problem "
},
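For reference, both negotiation requests are 8-byte startup-packet-shaped messages whose "protocol version" field carries a reserved code; the `1234.5679` in the error above is libpq's SSLRequest being misread as a regular startup packet once the postmaster has already consumed its single allowed negotiation attempt (the GSSENCRequest). A minimal sketch of the wire format, using the request codes documented in the frontend/backend protocol specification:

```python
import struct

# Reserved negotiation "protocol versions": major/minor packed into one
# int32 as major * 65536 + minor.
SSL_REQUEST_CODE = 1234 * 65536 + 5679     # SSLRequest: the 1234.5679 in the error
GSSENC_REQUEST_CODE = 1234 * 65536 + 5680  # GSSENCRequest: 1234.5680

def negotiation_packet(code):
    """Build the 8-byte SSLRequest/GSSENCRequest packet: an int32 length
    (which includes itself) followed by the int32 request code, both in
    network byte order."""
    return struct.pack("!ii", 8, code)
```

In the failing trace, libpq sends `negotiation_packet(GSSENC_REQUEST_CODE)`, receives 'N', then sends `negotiation_packet(SSL_REQUEST_CODE)`, which the 12.0/12.1 postmaster interprets as the startup packet itself.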
{
"msg_contents": "On Fri, Dec 06, 2019 at 02:25:46PM +0100, Jakob Egger wrote:\n> I've received a bug report from a PostgreSQL user that psql 12.1\n> failed to connect to a PostgreSQL 12.1 server, with the following\n> error message: \n> \n> psql: error: could not connect to server: FATAL: unsupported\n> frontend protocol 1234.5679: server supports 2.0 to 3.0\n\nAndrew Gierth has reported this issue, and has provided a patch:\nhttps://www.postgresql.org/message-id/87h82kzwqn.fsf@news-spur.riddles.org.uk\nIf you could help with it, that would be great.\n--\nMichael",
"msg_date": "Fri, 6 Dec 2019 23:08:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Frontend/Backend Protocol: SSL / GSS Protocol Negotiation Problem"
},
{
"msg_contents": "\n> On 6. Dec 2019, at 15:08, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Fri, Dec 06, 2019 at 02:25:46PM +0100, Jakob Egger wrote:\n>> I've received a bug report from a PostgreSQL user that psql 12.1\n>> failed to connect to a PostgreSQL 12.1 server, with the following\n>> error message: \n>> \n>> psql: error: could not connect to server: FATAL: unsupported\n>> frontend protocol 1234.5679: server supports 2.0 to 3.0\n> \n> Andrew Gierth has reported this issue, and has provided a patch:\n> https://www.postgresql.org/message-id/87h82kzwqn.fsf@news-spur.riddles.org.uk\n> If you could help with it, that would be great.\n> --\n> Michael\n\n\nThanks for pointing me to the right thread! My server-side fix is similar to Andrew's, but Andrew's is maybe a bit more elegant.\n\nBut this also needs to be fixed on the client side as well, otherwise affected clients can't connect to older servers anymore.\n\nMy second patch attempts to fix the issue on the client side.\n\nI'll respond to the other thread as well.\n\nJakob\n\n",
"msg_date": "Fri, 6 Dec 2019 15:27:33 +0100",
"msg_from": "Jakob Egger <jakob@eggerapps.at>",
"msg_from_op": true,
"msg_subject": "Re: Frontend/Backend Protocol: SSL / GSS Protocol Negotiation Problem "
},
{
"msg_contents": ">>>>> \"Jakob\" == Jakob Egger <jakob@eggerapps.at> writes:\n\n Jakob> But this also needs to be fixed on the client side as well,\n Jakob> otherwise affected clients can't connect to older servers\n Jakob> anymore.\n\nThere's a workaround, which is to set PGGSSENCMODE=disable on the\nclient.\n\nIt would be far better to avoid complicating the client side with this\nif we can possibly do so.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Fri, 06 Dec 2019 15:45:29 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Frontend/Backend Protocol: SSL / GSS Protocol Negotiation Problem"
},
{
"msg_contents": "\n> On 6. Dec 2019, at 16:45, Andrew Gierth <andrew@tao11.riddles.org.uk> wrote:\n> \n>>>>>> \"Jakob\" == Jakob Egger <jakob@eggerapps.at> writes:\n> \n> Jakob> But this also needs to be fixed on the client side as well,\n> Jakob> otherwise affected clients can't connect to older servers\n> Jakob> anymore.\n> \n> There's a workaround, which is to set PGGSSENCMODE=disable on the\n> client.\n> \n> It would be far better to avoid complicating the client side with this\n> if we can possibly do so.\n\nAs far as I understand, the bug impacts clients running version 12.0 or later with Kerberos support when connecting to 12.0 or 12.1 servers without Kerberos support. (Assuming that the bug will be fixed server side in 12.2)\n\nI don't know how many people use Kerberos, so I can't say if it's worth the additional complexity to work around the bug.\n\nIn any case, the workaround should probably be documented somewhere:\n\nIf you try to connect to a PostgreSQL 12.0 or 12.1 server and you get the following error message:\n\npsql: error: could not connect to server: FATAL: unsupported frontend protocol 1234.5679: server supports 2.0 to 3.0\n\nThen you need to use the connection parameter gssencmode=disable\n\nIs there a place where such workarounds are documented, or do we rely on Google indexing the mailing list archive?\n\nBest regards,\n\nJakob\n\n",
"msg_date": "Fri, 6 Dec 2019 21:06:13 +0100",
"msg_from": "Jakob Egger <jakob@eggerapps.at>",
"msg_from_op": true,
"msg_subject": "Re: Frontend/Backend Protocol: SSL / GSS Protocol Negotiation Problem "
}
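Concretely, the workaround quoted above can be applied either through the environment or as a connection-string parameter; the host and database names below are placeholders, not from the thread:

```shell
# Workaround for "unsupported frontend protocol 1234.5679" against a
# 12.0/12.1 server: disable GSSAPI encryption negotiation on the client.
PGGSSENCMODE=disable psql "host=db.example.com dbname=mydb"

# or equivalently, as a connection-string parameter:
psql "host=db.example.com dbname=mydb gssencmode=disable"
```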
]
[
{
"msg_contents": "Hi,\n\nThis topic was discussed several times, with the most recent\ndiscussions found at [1] and [2]. Based on those discussions, my\nunderstanding is that the current approach in BASE_BACKUP has too many\ndrawbacks and we should instead do this check in the backend. I've\nbeen working on such an approach at VMware, and I'm submitting it here\nto discuss the approach and rationales, and hopefully have such a\nfeature integrated.\n\nFirst, this was originally developed as an extension. It means that\nthe check is performed using an SRF. That's maybe not the best\napproach, as a transaction has to be kept for the total processing time.\nIt can be mitigated by checking each relation independently, but\nthat's still not ideal. Maybe using some utility commands (as part of\nVACUUM or a new CHECK command for instance) would be a better\napproach.\n\nThis brings the second consideration: how to report the list of corrupted\nblocks to end users. As I said this is for now returned via the SRF,\nbut this is clearly not ideal and should rather be made available more\nglobally. One usage of this information could be block level\nrecovery. I'm Cc-ing Sawada-san, as I know he's working on this and\nmentioned to me that he had ideas on passing the list of corrupted blocks\nusing the stat collector.\n\nFinally, the read and locking considerations. I tried to cover that\nextensively in the comments, but here are some details on how I tried\nto make the check safe while trying to keep the overhead as low as\npossible. First thing is that this is only doing buffered reads,\nwithout any attempt to discard OS cache. Therefore, any discrepancy\nbetween the OS cache and the disk cannot be detected unless you do\nother actions, such as sync / drop_caches on GNU/Linux.\n\nAn access share lock on the currently checked relation is held,\nmeaning that it can't get deleted/truncated. The total number of\nblocks for the given fork is retrieved first, so any new block will be\nignored. 
Such new blocks are considered out of scope as being written\nafter the start of the check.\n\nEach time a buffer is being checked, the target buffer mapping\npartition lock is acquired in shared mode, to prevent concurrent\neviction. If the buffer is found in shared buffers, it's pinned and\nreleased immediately, just to get the state. If the buffer is found\ndirty, no check is performed as it'll be written to disk by the\ncheckpointer, or during recovery in case of unclean shutdown.\nOtherwise, an IO lock is held while the the buffer is being read in a\nprivate buffer. IO Lock and buffer mapping lock are released and then\nthe check is performed.\n\nIf the buffer is not found in shared buffers, the buffer mapping\npartition lock is released immediately and the block is read from\ndisk. It's therefore possible to get a false positive here, as the\nblock could be concurrently read, modified and partially written to\ndisk. So, if an error is detected in this case, the check is\nrestarted from scratch and if the buffer is still not found in shared\nbuffers, the read will be done while still holding the buffer mapping\npartition lock to make sure that it can't get concurrently loaded and\nmodified. This is an optimistic approach to avoid performance\noverhead, assuming that there shouldn't be a lot of positive, and\nfalse positive possibility is very narrow.\n\nThe check consists of simply comparing the stored and computed\nchecksum, with an additional check that the page is really new (using\nPageIsVerified) if it's found as PageIsNew(). Since this is done\nafter releasing all locks, we could definitely add more checks without\ncausing much overhead, like pd_lower/pd_upper sanity. 
I prefer to\nkeep the check simple for now and rather focus on the general\napproach.\n\nFinally, I also reused vacuum costing GUC (for simplicity) and\napproach to add some throttling.\n\nI'm attaching a patch that adds a new pg_check_relation() sql function\nto perform a check of one or all relations, and some simple regression\ntests.\n\n[1] https://www.postgresql.org/message-id/flat/1532606373.3422.5.camel%40credativ.de\n[2] https://www.postgresql.org/message-id/flat/20190326170820.6sylklg7eh6uhabd%40alap3.anarazel.de",
"msg_date": "Fri, 6 Dec 2019 15:50:58 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Online checksums verification in the backend"
},
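The per-page check described above ("simply comparing the stored and computed checksum", plus a PageIsVerified()-style test for pages that look new) can be sketched as follows. This is an illustration, not the patch's code: the header offsets follow PageHeaderData (pd_checksum at byte 8, pd_upper at byte 14; little-endian layout is assumed for the sketch), and PostgreSQL's actual FNV-based pg_checksum_page() is left as an injected function.

```python
PAGE_SIZE = 8192  # BLCKSZ

def page_is_new(page):
    """PageIsNew()-style test: pd_upper (bytes 14..16 of the page
    header) is zero."""
    return page[14:16] == b"\x00\x00"

def verify_page(page, compute_checksum):
    """Decision logic of the per-page check: a "new" page must be
    entirely zeroes, as PageIsVerified() checks; otherwise the stored
    pd_checksum must match the computed one.  compute_checksum is any
    uint16-valued function standing in for pg_checksum_page().

    Returns True if the page passes, False if it looks corrupted."""
    assert len(page) == PAGE_SIZE
    if page_is_new(page):
        return page == bytes(PAGE_SIZE)
    stored = int.from_bytes(page[8:10], "little")  # pd_checksum
    return stored == compute_checksum(page)
```

As the message notes, since this runs after all locks are released, extra sanity checks (pd_lower/pd_upper, for instance) could be added here without much overhead.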
{
"msg_contents": "On Fri, Dec 6, 2019 at 9:51 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> This topic was discussed several times, with the most recent\n> discussions found at [1] and [2]. Based on those discussions, my\n> understanding is that the current approach in BASE_BACKUP has too many\n> drawbacks and we should instead do this check in the backend.\n\nGood idea.\n\n> This brings the second consideration: how to report the list corrupted\n> blocks to end users. As I said this is for now returned via the SRF,\n> but this is clearly not ideal and should rather be made available more\n> globally.\n\nSome people might prefer notices, because you can get those while the\nthing is still running, rather than a result set, which you will only\nsee when the query finishes. Other people might prefer an SRF, because\nthey want to have the data in structured form so that they can\npostprocess it. Not sure what you mean by \"more globally.\" I guess one\nidea would be to provide a way to kick this off in the background via\na background worker or similar and then have it put the results in a\ntable. But that might fail if there are checksum errors in the\ncatalogs themselves.\n\nI don't really know what's best.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 9 Dec 2019 11:21:46 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Mon, Dec 9, 2019 at 5:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Dec 6, 2019 at 9:51 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> > This brings the second consideration: how to report the list corrupted\n> > blocks to end users. As I said this is for now returned via the SRF,\n> > but this is clearly not ideal and should rather be made available more\n> > globally.\n>\n> Some people might prefer notices, because you can get those while the\n> thing is still running, rather than a result set, which you will only\n> see when the query finishes. Other people might prefer an SRF, because\n> they want to have the data in structured form so that they can\n> postprocess it. Not sure what you mean by \"more globally.\"\n\nI meant having the results available system-wide, not only to the\ncaller. I think that emitting a log/notice level should always be\ndone on top of whatever other communication facility we're using.\n\n> I guess one\n> idea would be to provide a way to kick this off in the background via\n> a background worker or similar and then have it put the results in a\n> table. But that might fail if there are checksum errors in the\n> catalogs themselves.\n\nYes that's a concern. We could maintain a list in (dynamic) shared\nmemory with a simple SQL wrapper to read the data, but that would be\nlost with a crash/restart. Or use\npgstat_report_checksum_failures_in_db(), modifying it to take a\nrelfilenode, blocknum and forknum and append that to some flat files,\nhoping that it won't get corrupted either.\n\n\n",
"msg_date": "Mon, 9 Dec 2019 19:02:43 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Mon, Dec 09, 2019 at 07:02:43PM +0100, Julien Rouhaud wrote:\n> On Mon, Dec 9, 2019 at 5:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> Some people might prefer notices, because you can get those while the\n>> thing is still running, rather than a result set, which you will only\n>> see when the query finishes. Other people might prefer an SRF, because\n>> they want to have the data in structured form so that they can\n>> postprocess it. Not sure what you mean by \"more globally.\"\n> \n> I meant having the results available system-wide, not only to the\n> caller. I think that emitting a log/notice level should always be\n> done on top on whatever other communication facility we're using.\n\nThe problem of notice and logs is that they tend to be ignored. Now I\ndon't see no problems either in adding something into the logs which\ncan be found later on for parsing on top of a SRF returned by the\ncaller which includes all the corruption details, say with pgbadger\nor your friendly neighborhood grep. I think that any backend function\nshould also make sure to call pgstat_report_checksum_failure() to\nreport a report visible at database-level in the catalogs, so as it is\npossible to use that as a cheap high-level warning. The details of\nthe failures could always be dug from the logs or the result of the\nfunction itself after finding out that something is wrong in\npg_stat_database.\n\n>> I guess one\n>> idea would be to provide a way to kick this off in the background via\n>> a background worker or similar and then have it put the results in a\n>> table. But that might fail if there are checksum errors in the\n>> catalogs themselves.\n> \n> Yes that's a concern. We could maintain a list in (dynamic) shared\n> memory with a simple SQL wrapper to read the data, but that would be\n> lost with a crash/restart. 
Or use\n> pgstat_report_checksum_failures_in_db(), modifying it to get an\n> relfilenode, bocknum and forknum and append that to some flat files,\n> hoping that it won't get corrupted either.\n\nIf a lot of blocks are corrupted, that could bloat things. Hence some\nretention policies would be necessary, and that's tricky to define and\nconfigure properly. I'd tend to be in the school of just logging the\ninformation and be done with it, because that's simple and because you\nwon't need to worry about any more configuration. Doing the work in\nthe background is still separate than a SQL-callable function though,\nno? In this case you need a connection to a database to allow the\nchecksum verification to happen on a relfilenode based on the relation\nto check, also because you want the thing to be safe concurrently\n(a background work here is a combo with a bgworker triggering dynamic\nchildren working on one database, not necessarily something that needs\nto be in core).\n--\nMichael",
"msg_date": "Tue, 10 Dec 2019 11:26:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Tue, Dec 10, 2019 at 3:26 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Dec 09, 2019 at 07:02:43PM +0100, Julien Rouhaud wrote:\n> > On Mon, Dec 9, 2019 at 5:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >> Some people might prefer notices, because you can get those while the\n> >> thing is still running, rather than a result set, which you will only\n> >> see when the query finishes. Other people might prefer an SRF, because\n> >> they want to have the data in structured form so that they can\n> >> postprocess it. Not sure what you mean by \"more globally.\"\n> >\n> > I meant having the results available system-wide, not only to the\n> > caller. I think that emitting a log/notice level should always be\n> > done on top on whatever other communication facility we're using.\n>\n> The problem of notice and logs is that they tend to be ignored. Now I\n> don't see no problems either in adding something into the logs which\n> can be found later on for parsing on top of a SRF returned by the\n> caller which includes all the corruption details, say with pgbadger\n> or your friendly neighborhood grep. I think that any backend function\n> should also make sure to call pgstat_report_checksum_failure() to\n> report a report visible at database-level in the catalogs, so as it is\n> possible to use that as a cheap high-level warning. The details of\n> the failures could always be dug from the logs or the result of the\n> function itself after finding out that something is wrong in\n> pg_stat_database.\n\nI agree that adding extra information in the logs and calling\npgstat_report_checksum_failure is a must do, and I changed that\nlocally. However, I doubt that the logs is the right place to find\nthe details of corrupted blocks. There's no guarantee that the file\nwill be accessible to the DBA, nor that the content won't get\ntruncated by the time it's needed. 
I really think that corruption is\nimportant enough to justify more specific location.\n\n> >> I guess one\n> >> idea would be to provide a way to kick this off in the background via\n> >> a background worker or similar and then have it put the results in a\n> >> table. But that might fail if there are checksum errors in the\n> >> catalogs themselves.\n> >\n> > Yes that's a concern. We could maintain a list in (dynamic) shared\n> > memory with a simple SQL wrapper to read the data, but that would be\n> > lost with a crash/restart. Or use\n> > pgstat_report_checksum_failures_in_db(), modifying it to get an\n> > relfilenode, bocknum and forknum and append that to some flat files,\n> > hoping that it won't get corrupted either.\n>\n> If a lot of blocks are corrupted, that could bloat things. Hence some\n> retention policies would be necessary, and that's tricky to define and\n> configure properly. I'd tend to be in the school of just logging the\n> information and be done with it, because that's simple and because you\n> won't need to worry about any more configuration.\n\nIf the number of corrupted blocks becomes high enough to excessively\nbloat things, it's likely that the instance is doomed anyway, so I'm\nnot especially concerned about it.\n\n\n",
"msg_date": "Tue, 10 Dec 2019 11:12:34 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Fri, Dec 6, 2019 at 11:51 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> This topic was discussed several times, with the most recent\n> discussions found at [1] and [2]. Based on those discussions, my\n> understanding is that the current approach in BASE_BACKUP has too many\n> drawbacks and we should instead do this check in the backend. I've\n> been working using such approach at VMware, and I'm submitting it here\n> to discuss the approach and rationales, and hopefully have such a\n> feature integrated.\n\nThank you for working on this!\n\n>\n> First, this was originally developed as an extension. It means that\n> the check is performed using an SRF. That's maybe not the best\n> approach, as a transaction has be kept for the total processing time.\n> It can be leveraged by checking each relation independently, but\n> that's still not ideal. Maybe using some utility commands (as part of\n> VACUUM or a new CHECK command for instance) would be a better\n> approach.\n>\n> This brings the second consideration: how to report the list corrupted\n> blocks to end users. As I said this is for now returned via the SRF,\n> but this is clearly not ideal and should rather be made available more\n> globally. One usage of this information could be block level\n> recovery. I'm Cc-ing Sawada-san, as I know he's working on this and\n> mentioned me that he had ideas on passing the list of corrupted blocks\n> using the stat collector.\n\nYes it's necessary the list of corrupted pages for single page\nrecovery. Apart from single page recovery I think it's helpful for DBA\nif they can find the corrupted blocks in the server logs and on a\nsystem view.\n\nI've also tried to report corrupted pages to the stats collector\nduring I researching single page recovery in PostgreSQL but one\nproblem is that the statistics in the stats collector is cleared when\ncrash recovery. I want the information of block corruption to survive\neven when the server down. 
And we might want to add checksums to the\npermanent file having information of database corruption. The\ncorrectness of these information would be important because we can fix\na database by restoring some tables from a logical backup or by doing\nreindex etc as long as we have a non-broken information of database\ncorruption.\n\n>\n> Finally, the read and locking considerations. I tried to cover that\n> extensively in the comments, but here are some details on how I tried\n> to make the check safe while trying to keep the overhead as low as\n> possible. First thing is that this is only doing buffered reads,\n> without any attempt to discard OS cache. Therefore, any discrepancy\n> between the OS cache and the disk cannot be detected unless you do\n> other actions, such as sync / drop_caches on GNU/Linux.\n>\n> An access share lock on the currently checked relation is held,\n> meaning that it can't get deleted/truncated. The total number of\n> blocks for the given fork is retrieved first, so any new block will be\n> ignored. Such new blocks are considered out of scope as being written\n> after the start of the check.\n>\n> Each time a buffer is being checked, the target buffer mapping\n> partition lock is acquired in shared mode, to prevent concurrent\n> eviction. If the buffer is found in shared buffers, it's pinned and\n> released immediately, just to get the state.\n\nI wonder if there is possibility that blocks on disk can be corrupted\neven if these are loaded to the shared buffer. ISTM the above method\ncannot detect such corruption. Reading and checking blocks fast is\nattractive but I thought it's also important to check blocks precisely\nwithout overlooking.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 24 Dec 2019 12:22:36 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Tue, Dec 24, 2019 at 4:23 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Dec 6, 2019 at 11:51 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > This brings the second consideration: how to report the list corrupted\n> > blocks to end users. As I said this is for now returned via the SRF,\n> > but this is clearly not ideal and should rather be made available more\n> > globally. One usage of this information could be block level\n> > recovery. I'm Cc-ing Sawada-san, as I know he's working on this and\n> > mentioned me that he had ideas on passing the list of corrupted blocks\n> > using the stat collector.\n>\n> Yes it's necessary the list of corrupted pages for single page\n> recovery. Apart from single page recovery I think it's helpful for DBA\n> if they can find the corrupted blocks in the server logs and on a\n> system view.\n>\n> I've also tried to report corrupted pages to the stats collector\n> during I researching single page recovery in PostgreSQL but one\n> problem is that the statistics in the stats collector is cleared when\n> crash recovery. I want the information of block corruption to survive\n> even when the server down.\n\nYes, having the list of corrupted blocks surviving a crash-and-restart\ncycle, and also available after a clean shutdown is definitely\nimportant.\n\n> And we might want to add checksums to the\n> permanent file having information of database corruption. The\n> correctness of these information would be important because we can fix\n> a database by restoring some tables from a logical backup or by doing\n> reindex etc as long as we have a non-broken information of database\n> corruption.\n\nAgreed\n\n> > Finally, the read and locking considerations. I tried to cover that\n> > extensively in the comments, but here are some details on how I tried\n> > to make the check safe while trying to keep the overhead as low as\n> > possible. 
First thing is that this is only doing buffered reads,\n> > without any attempt to discard OS cache. Therefore, any discrepancy\n> > between the OS cache and the disk cannot be detected unless you do\n> > other actions, such as sync / drop_caches on GNU/Linux.\n> >\n> > An access share lock on the currently checked relation is held,\n> > meaning that it can't get deleted/truncated. The total number of\n> > blocks for the given fork is retrieved first, so any new block will be\n> > ignored. Such new blocks are considered out of scope as being written\n> > after the start of the check.\n> >\n> > Each time a buffer is being checked, the target buffer mapping\n> > partition lock is acquired in shared mode, to prevent concurrent\n> > eviction. If the buffer is found in shared buffers, it's pinned and\n> > released immediately, just to get the state.\n>\n> I wonder if there is possibility that blocks on disk can be corrupted\n> even if these are loaded to the shared buffer. ISTM the above method\n> cannot detect such corruption. Reading and checking blocks fast is\n> attractive but I thought it's also important to check blocks precisely\n> without overlooking.\n\nIt can definitely happen, and it's the usual doomsday scenario:\ndatabase is working fine for months, then postgres is restarted say\nfor a minor version upgrade and then boom the most populars blocks\nthat are constantly used in read only were corrupted on disk but never\nevicted from shared buffers, and you have a major outage. I have\nwitnessed that unfortunately too many times. This is especially bad\nas in this kind of scenario, you typically discover the corruption\nonce all backup only contains the corrupted blocks.\n\nNote that in the approach I'm suggesting, I do verify blocks that are\nloaded in shared buffers, I only ignore the dirty blocks, as they'll\nbe written by the checkpointer or recovery process in case of unclean\nshutdown. 
A bufferpin isn't necessary to avoid torn page read, an IO\nlock also guarantees that and causes less overhead. The included TAP\ntest should also detect the corruption of a\npresent-in-shared-buffers-non-dirty block. It could however be\nimproved eg. by calling pg_prewarm to make sure that it's indeed in\nshared_buffers, and also do the same test after a clean restart to\nmake sure that it's hitting the not-in-shared-buffers case.\n\n\n",
"msg_date": "Tue, 24 Dec 2019 08:09:07 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
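The optimistic handling of blocks absent from shared buffers, as described earlier in the thread (read from disk without the buffer mapping partition lock; on a checksum failure, restart the check, and only re-read while holding the lock if the buffer is still not loaded), can be sketched with injected stand-ins for the backend primitives. Every callable name below is an assumption of the sketch, not the patch's actual API:

```python
def check_block_on_disk(read_block, checksum_ok, found_in_shared_buffers,
                        partition_lock):
    """Optimistic check for a block that was not found in shared buffers.

    read_block() reads the block from disk, checksum_ok() verifies it,
    found_in_shared_buffers() re-probes the buffer mapping table, and
    partition_lock is a context manager standing in for the buffer
    mapping partition lock.  Returns True if the block checks out,
    False if it is corrupted."""
    # First attempt: read without holding the mapping lock.  A failure
    # here may be a false positive caused by a concurrent
    # load-modify-partial-write of the same block.
    if checksum_ok(read_block()):
        return True
    # Restart from scratch: if the buffer has appeared in shared buffers
    # meanwhile, the in-memory path handles it instead (not shown here).
    if found_in_shared_buffers():
        return True
    # Still on disk only: re-read while holding the mapping lock so the
    # block cannot be concurrently loaded and modified.
    with partition_lock:
        return checksum_ok(read_block())
```

The design choice matches the rationale upthread: true positives should be rare, so the cost of the locked re-read is paid only on a suspected corruption.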
{
"msg_contents": "On Tue, 24 Dec 2019 at 16:09, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Tue, Dec 24, 2019 at 4:23 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Dec 6, 2019 at 11:51 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > > This brings the second consideration: how to report the list corrupted\n> > > blocks to end users. As I said this is for now returned via the SRF,\n> > > but this is clearly not ideal and should rather be made available more\n> > > globally. One usage of this information could be block level\n> > > recovery. I'm Cc-ing Sawada-san, as I know he's working on this and\n> > > mentioned me that he had ideas on passing the list of corrupted blocks\n> > > using the stat collector.\n> >\n> > Yes it's necessary the list of corrupted pages for single page\n> > recovery. Apart from single page recovery I think it's helpful for DBA\n> > if they can find the corrupted blocks in the server logs and on a\n> > system view.\n> >\n> > I've also tried to report corrupted pages to the stats collector\n> > during I researching single page recovery in PostgreSQL but one\n> > problem is that the statistics in the stats collector is cleared when\n> > crash recovery. I want the information of block corruption to survive\n> > even when the server down.\n>\n> Yes, having the list of corrupted blocks surviving a crash-and-restart\n> cycle, and also available after a clean shutdown is definitely\n> important.\n>\n> > And we might want to add checksums to the\n> > permanent file having information of database corruption. The\n> > correctness of these information would be important because we can fix\n> > a database by restoring some tables from a logical backup or by doing\n> > reindex etc as long as we have a non-broken information of database\n> > corruption.\n>\n> Agreed\n>\n> > > Finally, the read and locking considerations. 
I tried to cover that\n> > > extensively in the comments, but here are some details on how I tried\n> > > to make the check safe while trying to keep the overhead as low as\n> > > possible. First thing is that this is only doing buffered reads,\n> > > without any attempt to discard OS cache. Therefore, any discrepancy\n> > > between the OS cache and the disk cannot be detected unless you do\n> > > other actions, such as sync / drop_caches on GNU/Linux.\n> > >\n> > > An access share lock on the currently checked relation is held,\n> > > meaning that it can't get deleted/truncated. The total number of\n> > > blocks for the given fork is retrieved first, so any new block will be\n> > > ignored. Such new blocks are considered out of scope as being written\n> > > after the start of the check.\n> > >\n> > > Each time a buffer is being checked, the target buffer mapping\n> > > partition lock is acquired in shared mode, to prevent concurrent\n> > > eviction. If the buffer is found in shared buffers, it's pinned and\n> > > released immediately, just to get the state.\n> >\n> > I wonder if there is possibility that blocks on disk can be corrupted\n> > even if these are loaded to the shared buffer. ISTM the above method\n> > cannot detect such corruption. Reading and checking blocks fast is\n> > attractive but I thought it's also important to check blocks precisely\n> > without overlooking.\n>\n> It can definitely happen, and it's the usual doomsday scenario:\n> database is working fine for months, then postgres is restarted say\n> for a minor version upgrade and then boom the most populars blocks\n> that are constantly used in read only were corrupted on disk but never\n> evicted from shared buffers, and you have a major outage. I have\n> witnessed that unfortunately too many times. 
This is especially bad\n> as in this kind of scenario, you typically discover the corruption\n> once all backup only contains the corrupted blocks.\n>\n> Note that in the approach I'm suggesting, I do verify blocks that are\n> loaded in shared buffers, I only ignore the dirty blocks, as they'll\n> be written by the checkpointer or recovery process in case of unclean\n> shutdown. A bufferpin isn't necessary to avoid torn page read, an IO\n> lock also guarantees that and causes less overhead. The included TAP\n> test should also detect the corruption of a\n> present-in-shared-buffers-non-dirty block. It could however be\n> improved eg. by calling pg_prewarm to make sure that it's indeed in\n> shared_buffers, and also do the same test after a clean restart to\n> make sure that it's hitting the not-in-shared-buffers case.\n\nIt reads blocks from disk even if they are loaded in shared buffer.\nNow I understand. Thanks!\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 25 Dec 2019 11:23:50 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
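The block-selection rule settled in the exchange above — skip a block that is dirty in shared buffers (it will be flushed by the next checkpoint, or rewritten by recovery after a crash), but verify the on-disk copy even when a clean copy is cached — can be sketched as plain C. Everything below is illustrative: the struct names, the toy 128-byte page, and the rolling checksum are stand-ins, not the patch's interface, and the checksum is not PostgreSQL's FNV-based `pg_checksum_page()`.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define TOY_PAGE_SIZE 128       /* toy page; real PostgreSQL pages are 8 kB */

typedef struct ToyPage {
    uint16_t checksum;                      /* stored in the page header */
    uint8_t  data[TOY_PAGE_SIZE - sizeof(uint16_t)];
} ToyPage;

typedef struct BufferState {
    bool in_cache;              /* block currently in shared buffers? */
    bool dirty;                 /* cached copy modified since last write? */
} BufferState;

typedef enum { BLOCK_SKIPPED_DIRTY, BLOCK_OK, BLOCK_CORRUPT } CheckResult;

/* Toy rolling checksum over the data area; a deliberate stand-in for
 * pg_checksum_page(), whose real algorithm this sketch does not
 * reproduce. */
static uint16_t toy_page_checksum(const ToyPage *p)
{
    uint32_t sum = 0;

    for (size_t i = 0; i < sizeof(p->data); i++)
        sum = (sum * 31 + p->data[i]) & 0xFFFF;
    return (uint16_t) sum;
}

/* Decision logic from the thread: a block dirty in cache is ignored;
 * otherwise the on-disk copy is verified even when a clean copy sits
 * in shared buffers (the doomsday scenario described above). */
static CheckResult check_block(const BufferState *buf, const ToyPage *on_disk)
{
    if (buf->in_cache && buf->dirty)
        return BLOCK_SKIPPED_DIRTY;
    return toy_page_checksum(on_disk) == on_disk->checksum
        ? BLOCK_OK : BLOCK_CORRUPT;
}
```

Note that a clean-but-cached block still triggers a disk read here, which is exactly the property Sawada-san was asking about: corruption that never reaches shared buffers is still caught.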
{
"msg_contents": "On Tue, Dec 10, 2019 at 11:12:34AM +0100, Julien Rouhaud wrote:\n> On Tue, Dec 10, 2019 at 3:26 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Mon, Dec 09, 2019 at 07:02:43PM +0100, Julien Rouhaud wrote:\n> > > On Mon, Dec 9, 2019 at 5:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > >> Some people might prefer notices, because you can get those while the\n> > >> thing is still running, rather than a result set, which you will only\n> > >> see when the query finishes. Other people might prefer an SRF, because\n> > >> they want to have the data in structured form so that they can\n> > >> postprocess it. Not sure what you mean by \"more globally.\"\n> > >\n> > > I meant having the results available system-wide, not only to the\n> > > caller. I think that emitting a log/notice level should always be\n> > > done on top on whatever other communication facility we're using.\n> >\n> > The problem of notice and logs is that they tend to be ignored. Now I\n> > don't see no problems either in adding something into the logs which\n> > can be found later on for parsing on top of a SRF returned by the\n> > caller which includes all the corruption details, say with pgbadger\n> > or your friendly neighborhood grep. I think that any backend function\n> > should also make sure to call pgstat_report_checksum_failure() to\n> > report a report visible at database-level in the catalogs, so as it is\n> > possible to use that as a cheap high-level warning. The details of\n> > the failures could always be dug from the logs or the result of the\n> > function itself after finding out that something is wrong in\n> > pg_stat_database.\n>\n> I agree that adding extra information in the logs and calling\n> pgstat_report_checksum_failure is a must do, and I changed that\n> locally. However, I doubt that the logs is the right place to find\n> the details of corrupted blocks. 
There's no guarantee that the file\n> will be accessible to the DBA, nor that the content won't get\n> truncated by the time it's needed. I really think that corruption is\n> important enough to justify more specific location.\n\n\nThe cfbot reported a build failure, so here's a rebased v2 which also contains\nthe pg_stat_report_failure() call and extra log info.",
"msg_date": "Wed, 11 Mar 2020 08:18:23 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Wed, Mar 11, 2020 at 08:18:23AM +0100, Julien Rouhaud wrote:\n> The cfbot reported a build failure, so here's a rebased v2 which also contains\n> the pg_stat_report_failure() call and extra log info.\n\n+ * - if a block is not found in shared_buffers, the LWLock is relased and the\n+ * block is read from disk without taking any lock. If an error is detected,\n+ * the read block will be discarded and retrieved again while holding the\n+ * LWLock. This is because an error due to concurrent write is possible but\n+ * very unlikely, so it's better to have an optimistic approach to limit\n+ * locking overhead\nThis can be risky with false positives, no? With a large amount of\nshared buffer eviction you actually increase the risk of torn page\nreads. Instead of a logic relying on partition mapping locks, which\ncould be unwise on performance grounds, did you consider different\napproaches? For example a kind of pre-emptive lock on the page in\nstorage to prevent any shared buffer operation to happen while the\nblock is read from storage, that would act like a barrier.\n\n+ * Vacuum's GUCs are used to avoid consuming too much resources while running\n+ * this tool.\nShouldn't this involve separate GUCs instead of the VACUUM ones? I\nguess that this leads to the fact that this function may be better as\na contrib module, with the addition of some better-suited APIs in core\n(see paragraph above).\n\n+Run\n+ make check\n+or\n+ make installcheck\nWhy is installcheck mentioned here?\n\nI don't think that it is appropriate to place the SQL-callable part in\nthe existing checksum.c. 
I would suggest instead a new file, say\nchecksumfuncs.c in src/backend/utils/adt/, holding any SQL functions\nfor checksums.\n\n-SUBDIRS = perl regress isolation modules authentication recovery\n subscription\n+SUBDIRS = perl regress isolation modules authentication check_relation \\\n+ recovery subscription\nIt seems to me that this test would be a good fit for\nsrc/test/modules/test_misc/.\n\n+static void\n+check_all_relations(TupleDesc tupdesc, Tuplestorestate *tupstore,\n+ ForkNumber forknum)\nPer the argument of bloat, I think that I would remove\ncheck_all_relation() as this function could take a very long time to\nrun, and just make the SQL function strict.\n\n+ * - if a block is dirty in shared_buffers, it's ignored as it'll be flushed to\n+ * disk either before the end of the next checkpoint or during recovery in\n+ * case of unsafe shutdown\nWouldn't it actually be a good thing to check that the page on storage\nis fine in this case? This depends on the system settings and the\ncheckpoint frequency, but checkpoint_timeout can be extended up to 1\nday. And plenty of things could happen to the storage in one day,\nincluding a base backup that includes a corrupted page on storage,\nthat this function would not be able to detect.\n\n+ * - if a block is otherwise found in shared_buffers, an IO lock is taken on\n+ * the block and the block is then read from storage, ignoring the block in\n+ * shared_buffers\nYeah, I think that you are right here to check the page on storage\nanyway.\n\n+ * we detect if a block is in shared_buffers or not. 
See get_buffer()\n+ * comments for more details about the locking strategy.\nget_buffer() does not exist in your patch, check_get_buffer() does.\n\n+ * - if a block is not found in shared_buffers, the LWLock is relased and the\n[...]\n+ * To avoid torn page and possible false postives when reading data, and\nTypos.\n\n+ if (!DataChecksumsEnabled())\n+ elog(ERROR, \"Data checksums are not enabled\");\nNote that elog() is for the class of errors which are never expected,\nand here a caller of pg_check_relation() with checksums disabled can\ntrigger that. So you need to call ereport() with\nERRCODE_FEATURE_NOT_SUPPORTED.\n\n+ * - if a block is dirty in shared_buffers, it's ignored as it'll be flushed to\n+ * disk either before the end of the next checkpoint or during recovery in\n+ * case of unsafe shutdown\nNot sure that the indentation is going to react well on that part of\nthe patch, perhaps it would be better to add some \"/*-------\" at the\nbeginning and end of the comment block to tell pgindent to ignore this\npart?\n\nBased on the feedback gathered on this thread, I guess that you should\nhave a SRF returning the list of broken blocks, as well as NOTICE\nmessages. Another thing to consider is the addition of a range\nargument to only check a certain portion of the blocks, say one\nsegment file at a time, etc. Fine by me to not include in the first\nflavor of the patch.\n\nThe patch needs documentation.\n--\nMichael",
"msg_date": "Mon, 16 Mar 2020 12:29:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Wed, 11 Mar 2020 at 16:18, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Tue, Dec 10, 2019 at 11:12:34AM +0100, Julien Rouhaud wrote:\n> > On Tue, Dec 10, 2019 at 3:26 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Mon, Dec 09, 2019 at 07:02:43PM +0100, Julien Rouhaud wrote:\n> > > > On Mon, Dec 9, 2019 at 5:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > >> Some people might prefer notices, because you can get those while the\n> > > >> thing is still running, rather than a result set, which you will only\n> > > >> see when the query finishes. Other people might prefer an SRF, because\n> > > >> they want to have the data in structured form so that they can\n> > > >> postprocess it. Not sure what you mean by \"more globally.\"\n> > > >\n> > > > I meant having the results available system-wide, not only to the\n> > > > caller. I think that emitting a log/notice level should always be\n> > > > done on top on whatever other communication facility we're using.\n> > >\n> > > The problem of notice and logs is that they tend to be ignored. Now I\n> > > don't see no problems either in adding something into the logs which\n> > > can be found later on for parsing on top of a SRF returned by the\n> > > caller which includes all the corruption details, say with pgbadger\n> > > or your friendly neighborhood grep. I think that any backend function\n> > > should also make sure to call pgstat_report_checksum_failure() to\n> > > report a report visible at database-level in the catalogs, so as it is\n> > > possible to use that as a cheap high-level warning. The details of\n> > > the failures could always be dug from the logs or the result of the\n> > > function itself after finding out that something is wrong in\n> > > pg_stat_database.\n> >\n> > I agree that adding extra information in the logs and calling\n> > pgstat_report_checksum_failure is a must do, and I changed that\n> > locally. 
However, I doubt that the logs is the right place to find\n> > the details of corrupted blocks. There's no guarantee that the file\n> > will be accessible to the DBA, nor that the content won't get\n> > truncated by the time it's needed. I really think that corruption is\n> > important enough to justify more specific location.\n>\n>\n> The cfbot reported a build failure, so here's a rebased v2 which also contains\n> the pg_stat_report_failure() call and extra log info.\n\nIn addition to comments from Michael-san, here are my comments:\n\n1.\n+ if (!is_member_of_role(GetUserId(), DEFAULT_ROLE_READ_SERVER_FILES))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n+ errmsg(\"only superuser or a member of the\npg_read_server_files role may use this function\")));\n+\n+ if (!DataChecksumsEnabled())\n+ elog(ERROR, \"Data checksums are not enabled\");\n\nI think it's better to reverse the order of the above checks.\n\n2.\n+#define CRF_COLS 5 /* Number of output arguments in the SRF */\n\nShould it be SRF_COLS?\n\n3.\n+static void\n+check_delay_point(void)\n+{\n+ /* Always check for interrupts */\n+ CHECK_FOR_INTERRUPTS();\n+\n+ /* Nap if appropriate */\n+ if (!InterruptPending && VacuumCostBalance >= VacuumCostLimit)\n+ {\n+ int msec;\n+\n+ msec = VacuumCostDelay * VacuumCostBalance / VacuumCostLimit;\n+ if (msec > VacuumCostDelay * 4)\n+ msec = VacuumCostDelay * 4;\n+\n+ pg_usleep(msec * 1000L);\n+\n+ VacuumCostBalance = 0;\n+\n+ /* Might have gotten an interrupt while sleeping */\n+ CHECK_FOR_INTERRUPTS();\n+ }\n+}\n\nEven if we use vacuum delay for this function, I think we need to set\nVacuumDelayActive and return if it's false, or it's better to just\nreturn if VacuumCostDelay == 0.\n\n4.\n+static void\n+check_all_relations(TupleDesc tupdesc, Tuplestorestate *tupstore,\n+ ForkNumber forknum)\n\nI also agree with Michael-san to remove this function. 
Instead we can\ncheck all relations by:\n\nselect pg_check_relation(oid) from pg_class;\n\n6.\nOther typos\n\ns/dirted/dirtied/\ns/explictly/explicitly/\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 16 Mar 2020 13:53:35 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
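The nap computation inside the quoted `check_delay_point()` boils down to a small pure function. In this sketch the globals (`VacuumCostBalance`, `VacuumCostLimit`, `VacuumCostDelay`) become ordinary parameters, and delays are plain ints in milliseconds (the real GUC is a floating-point value); it also folds in Sawada-san's remark that a non-positive cost delay should disable throttling entirely.

```c
#include <assert.h>

/* Returns how long (ms) the checker should nap at this delay point,
 * mirroring the vacuum-style logic quoted in the review above.  A
 * non-positive delay or limit disables throttling; otherwise the nap
 * is proportional to the accumulated cost balance, capped at four
 * times the base delay exactly as in the quoted code. */
static int cost_delay_msec(int cost_balance, int cost_limit, int cost_delay)
{
    if (cost_delay <= 0 || cost_limit <= 0 || cost_balance < cost_limit)
        return 0;               /* throttling off, or budget not yet spent */

    int msec = cost_delay * cost_balance / cost_limit;

    if (msec > cost_delay * 4)
        msec = cost_delay * 4;  /* cap the nap, as the quoted code does */
    return msec;
}
```

The caller would `pg_usleep(msec * 1000L)` and reset the balance to zero after a non-zero return, with `CHECK_FOR_INTERRUPTS()` on both sides of the sleep as in the patch.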
{
"msg_contents": "Thanks for the review Michael!\n\nOn Mon, Mar 16, 2020 at 12:29:28PM +0900, Michael Paquier wrote:\n> On Wed, Mar 11, 2020 at 08:18:23AM +0100, Julien Rouhaud wrote:\n> > The cfbot reported a build failure, so here's a rebased v2 which also contains\n> > the pg_stat_report_failure() call and extra log info.\n>\n> + * - if a block is not found in shared_buffers, the LWLock is relased and the\n> + * block is read from disk without taking any lock. If an error is detected,\n> + * the read block will be discarded and retrieved again while holding the\n> + * LWLock. This is because an error due to concurrent write is possible but\n> + * very unlikely, so it's better to have an optimistic approach to limit\n> + * locking overhead\n> This can be risky with false positives, no?\n\n\nDo you mean high probability of false positive in the 1st iteration, so running\nfrequently the recheck that can't have false positive, not that the 2nd check\ncan lead to false positive?\n\n\n> With a large amount of\n> shared buffer eviction you actually increase the risk of torn page\n> reads. Instead of a logic relying on partition mapping locks, which\n> could be unwise on performance grounds, did you consider different\n> approaches? For example a kind of pre-emptive lock on the page in\n> storage to prevent any shared buffer operation to happen while the\n> block is read from storage, that would act like a barrier.\n\n\nEven with a workload having a large shared_buffers eviction pattern, I don't\nthink that there's a high probability of hitting a torn page. 
Unless I'm\nmistaken it can only happen if all those steps happen concurrently to doing the\nblock read just after releasing the LWLock:\n\n- postgres read the same block in shared_buffers (including all the locking)\n- dirties it\n- writes part of the page\n\nIt's certainly possible, but it seems so unlikely that the optimistic lock-less\napproach seems like a very good tradeoff.\n\n\n>\n> + * Vacuum's GUCs are used to avoid consuming too much resources while running\n> + * this tool.\n> Shouldn't this involve separate GUCs instead of the VACUUM ones?\n\n\nWe could but the access pattern looked so similar that it looked like a good\nidea to avoid adding 2 new GUC for that to keep configuration simple. Unless\nthere are objections I'll add them in the next version.\n\n> I guess that this leads to the fact that this function may be better as\n> a contrib module, with the addition of some better-suited APIs in core\n> (see paragraph above).\n\n\nBelow?\n\n\n>\n> +Run\n> + make check\n> +or\n> + make installcheck\n> Why is installcheck mentioned here?\n\n\nOups, copy/pasto error from the original contrib module this stuff was\ninitially implemented as, will fix.\n\n>\n> I don't think that it is appropriate to place the SQL-callable part in\n> the existing checksum.c. 
I would suggest instead a new file, say\n> checksumfuncs.c in src/backend/utils/adt/, holding any SQL functions\n> for checksums.\n\n\nAgreed.\n\n>\n> -SUBDIRS = perl regress isolation modules authentication recovery\n> subscription\n> +SUBDIRS = perl regress isolation modules authentication check_relation \\\n> + recovery subscription\n> It seems to me that this test would be a good fit for\n> src/test/modules/test_misc/.\n\n\nWFM.\n\n>\n> +static void\n> +check_all_relations(TupleDesc tupdesc, Tuplestorestate *tupstore,\n> + ForkNumber forknum)\n> Per the argument of bloat, I think that I would remove\n> check_all_relation() as this function could take a very long time to\n> run, and just make the SQL function strict.\n\n\nNo objection.\n\n>\n> + * - if a block is dirty in shared_buffers, it's ignored as it'll be flushed to\n> + * disk either before the end of the next checkpoint or during recovery in\n> + * case of unsafe shutdown\n> Wouldn't it actually be a good thing to check that the page on storage\n> is fine in this case? This depends on the system settings and the\n> checkpoint frequency, but checkpoint_timeout can be extended up to 1\n> day. And plenty of things could happen to the storage in one day,\n> including a base backup that includes a corrupted page on storage,\n> that this function would not be able to detect.\n\n\nHow could that lead to data corruption? If postgres crashes before the\ncheckpoint completion, the block will be overwritten during recovery, and if a\nbase backup is taken the block will also be overwritten while replaying all the\nrequired WALs. Detecting a corrupted blocks in those cases would have the\nmerit of possibly warning about possibly broken hardware sooner, but it would\nalso make the check more expensive as the odds to prevent postgres from\nevicting a dirty block is way higher. 
Maybe an additional GUC for that?\n\nFor the record when I first tested that feature I did try to check dirty\nblocks, and it seemed that dirty blocks of shared relation were sometimes\nwrongly reported as corrupted. I didn't try to investigate more though.\n\n\n> + * we detect if a block is in shared_buffers or not. See get_buffer()\n> + * comments for more details about the locking strategy.\n> get_buffer() does not exist in your patch, check_get_buffer() does.\n\n\nOops, will fix.\n\n\n>\n\n> + * - if a block is not found in shared_buffers, the LWLock is relased and the\n> [...]\n> + * To avoid torn page and possible false postives when reading data, and\n> Typos.\n>\n> + if (!DataChecksumsEnabled())\n> + elog(ERROR, \"Data checksums are not enabled\");\n> Note that elog() is for the class of errors which are never expected,\n> and here a caller of pg_check_relation() with checksums disabled can\n> trigger that. So you need to call ereport() with\n> ERRCODE_FEATURE_NOT_SUPPORTED.\n\n\nIndeed, will fix.\n\n\n>\n> + * - if a block is dirty in shared_buffers, it's ignored as it'll be flushed to\n> + * disk either before the end of the next checkpoint or during recovery in\n> + * case of unsafe shutdown\n> Not sure that the indentation is going to react well on that part of\n> the patch, perhaps it would be better to add some \"/*-------\" at the\n> beginning and end of the comment block to tell pgindent to ignore this\n> part?\n\n\nOk. Although I think only the beginning comment is needed?\n\n>\n> Based on the feedback gathered on this thread, I guess that you should\n> have a SRF returning the list of broken blocks, as well as NOTICE\n> messages.\n\n\nThe current patch has an SRF and a WARNING message, do you want an additional\nNOTICE message or downgrade the existing one?\n\n> Another thing to consider is the addition of a range\n> argument to only check a certain portion of the blocks, say one\n> segment file at a time, etc. 
Fine by me to not include in the first\n> flavor of the patch.\n\n\nOk!\n\n\n> The patch needs documentation.\n\n\nI'll try to add some.\n\n\n",
"msg_date": "Mon, 16 Mar 2020 09:21:22 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
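The optimistic tradeoff argued for above — verify lock-free first, and only on failure retry under the lock that serializes against concurrent eviction and writes — reduces to a two-step control flow. The sketch below abstracts the I/O and locking behind a callback, so the function names and the simulated outcomes are illustrative, not the patch's actual code; the key property is that corruption is reported only when the *locked* recheck also fails, so a torn read in the first pass costs one extra read but can never produce a false positive.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Reads the block and verifies its checksum; `locked` says whether the
 * caller-side lock preventing concurrent writes is held. */
typedef bool (*verify_fn)(uint32_t blkno, bool locked);

static bool block_is_corrupted(uint32_t blkno, verify_fn verify)
{
    if (verify(blkno, false))           /* lock-free fast path */
        return false;
    return !verify(blkno, true);        /* authoritative locked recheck */
}

/* Simulated outcomes for the usage example (illustrative only). */
static int torn_calls = 0;

static bool torn_then_ok(uint32_t blkno, bool locked)
{
    (void) blkno;
    torn_calls++;
    return locked;      /* fails lock-free (torn read), passes under lock */
}

static bool always_bad(uint32_t blkno, bool locked)
{
    (void) blkno;
    (void) locked;
    return false;       /* genuinely corrupted on storage */
}
```

This matches the argument in the message above: the only extra cost of the lock-free pass is a second, locked read for blocks that are corrupted or caught mid-write, both of which should be close to zero in practice.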
{
"msg_contents": "On Mon, Mar 16, 2020 at 01:53:35PM +0900, Masahiko Sawada wrote:\n>\n> In addition to comments from Michael-san, here are my comments:\n>\n> 1.\n> + if (!is_member_of_role(GetUserId(), DEFAULT_ROLE_READ_SERVER_FILES))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> + errmsg(\"only superuser or a member of the\n> pg_read_server_files role may use this function\")));\n\n\nGood point! I'll fix it.\n\n\n> +\n> + if (!DataChecksumsEnabled())\n> + elog(ERROR, \"Data checksums are not enabled\");\n>\n> I think it's better to reverse the order of the above checks.\n\n\nIndeed.\n\n\n>\n> 2.\n> +#define CRF_COLS 5 /* Number of output arguments in the SRF */\n>\n> Should it be SRF_COLS?\n\n\nOops, will fix.\n\n\n>\n> 3.\n> +static void\n> +check_delay_point(void)\n> +{\n> + /* Always check for interrupts */\n> + CHECK_FOR_INTERRUPTS();\n> +\n> + /* Nap if appropriate */\n> + if (!InterruptPending && VacuumCostBalance >= VacuumCostLimit)\n> + {\n> + int msec;\n> +\n> + msec = VacuumCostDelay * VacuumCostBalance / VacuumCostLimit;\n> + if (msec > VacuumCostDelay * 4)\n> + msec = VacuumCostDelay * 4;\n> +\n> + pg_usleep(msec * 1000L);\n> +\n> + VacuumCostBalance = 0;\n> +\n> + /* Might have gotten an interrupt while sleeping */\n> + CHECK_FOR_INTERRUPTS();\n> + }\n> +}\n>\n> Even if we use vacuum delay for this function, I think we need to set\n> VacuumDelayActive and return if it's false, or it's better to just\n> return if VacuumCostDelay == 0.\n\n\nGood point, I'll fix that.\n\n\n>\n> 4.\n> +static void\n> +check_all_relations(TupleDesc tupdesc, Tuplestorestate *tupstore,\n> + ForkNumber forknum)\n>\n> I also agree with Michael-san to remove this function. Instead we can\n> check all relations by:\n>\n> select pg_check_relation(oid) from pg_class;\n\n\nSure, but ideally we should do that in a client program (eg. 
pg_checksums)\nthat wouldn't maintain a transaction active for the whole execution.\n\n\n> 6.\n> Other typos\n>\n> s/dirted/dirtied/\n> s/explictly/explicitly/\n\n\nWill fix, thanks!\n\n\n",
"msg_date": "Mon, 16 Mar 2020 09:42:39 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Mon, Mar 16, 2020 at 09:42:39AM +0100, Julien Rouhaud wrote:\n> On Mon, Mar 16, 2020 at 01:53:35PM +0900, Masahiko Sawada wrote:\n> >\n> > In addition to comments from Michael-san, here are my comments:\n\nThanks both for the reviews. I'm attaching a v3 with all comments addressed,\nexcept:\n\n> It seems to me that this test would be a good fit for\n> src/test/modules/test_misc/.\n\n\nAFAICT this is explicitly documented as tests for various extensions, and for\nnow it's a core function, so I didn't move it.\n\n\n> +Run\n> + make check\n> +or\n> + make installcheck\n> Why is installcheck mentioned here?\n\n\nThis is actually already used in multiple other test readme.",
"msg_date": "Mon, 16 Mar 2020 14:15:22 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Mon, Mar 16, 2020 at 02:15:27PM +0100, Julien Rouhaud wrote:\n> On Mon, Mar 16, 2020 at 09:42:39AM +0100, Julien Rouhaud wrote:\n> > On Mon, Mar 16, 2020 at 01:53:35PM +0900, Masahiko Sawada wrote:\n> > >\n> > > In addition to comments from Michael-san, here are my comments:\n>\n> Thanks both for the reviews. I'm attaching a v3 with all comments addressed,\n> except:\n>\n> > It seems to me that this test would be a good fit for\n> > src/test/modules/test_misc/.\n>\n>\n> AFAICT this is explicitly documented as tests for various extensions, and for\n> now it's a core function, so I didn't move it.\n>\n>\n> > +Run\n> > + make check\n> > +or\n> > + make installcheck\n> > Why is installcheck mentioned here?\n>\n>\n> This is actually already used in multiple other test readme.\n\n\nSorry I forgot to update the regression tests. v4 attached.",
"msg_date": "Mon, 16 Mar 2020 15:29:32 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Mon, Mar 16, 2020 at 09:21:22AM +0100, Julien Rouhaud wrote:\n> On Mon, Mar 16, 2020 at 12:29:28PM +0900, Michael Paquier wrote:\n>> With a large amount of\n>> shared buffer eviction you actually increase the risk of torn page\n>> reads. Instead of a logic relying on partition mapping locks, which\n>> could be unwise on performance grounds, did you consider different\n>> approaches? For example a kind of pre-emptive lock on the page in\n>> storage to prevent any shared buffer operation to happen while the\n>> block is read from storage, that would act like a barrier.\n> \n> Even with a workload having a large shared_buffers eviction pattern, I don't\n> think that there's a high probability of hitting a torn page. Unless I'm\n> mistaken it can only happen if all those steps happen concurrently to doing the\n> block read just after releasing the LWLock:\n> \n> - postgres read the same block in shared_buffers (including all the locking)\n> - dirties it\n> - writes part of the page\n> \n> It's certainly possible, but it seems so unlikely that the optimistic lock-less\n> approach seems like a very good tradeoff.\n\nHaving false reports in this area could be very confusing for the\nuser. That's for example possible now with checksum verification and\nbase backups.\n\n>> I guess that this leads to the fact that this function may be better as\n>> a contrib module, with the addition of some better-suited APIs in core\n>> (see paragraph above).\n> \n> Below?\n\nAbove. This thought more precisely:\n>> For example a kind of pre-emptive lock on the page in\n>> storage to prevent any shared buffer operation to happen while the\n>> block is read from storage, that would act like a barrier.\n\n> For the record when I first tested that feature I did try to check dirty\n> blocks, and it seemed that dirty blocks of shared relation were sometimes\n> wrongly reported as corrupted. I didn't try to investigate more though.\n\nHmm. 
It would be good to look at that, correct verification of shared\nrelations matter.\n\n>> + * - if a block is dirty in shared_buffers, it's ignored as it'll be flushed to\n>> + * disk either before the end of the next checkpoint or during recovery in\n>> + * case of unsafe shutdown\n>> Not sure that the indentation is going to react well on that part of\n>> the patch, perhaps it would be better to add some \"/*-------\" at the\n>> beginning and end of the comment block to tell pgindent to ignore this\n>> part?\n> \n> Ok. Although I think only the beginning comment is needed?\n\nFrom src/tools/pgindent/README:\n\"pgindent will reflow any comment block that's not at the left margin.\nIf this messes up manual formatting that ought to be preserved,\nprotect the comment block with some dashes:\"\n\n /*----------\n\t * Text here will not be touched by pgindent.\n *----------\n */\n\n>> Based on the feedback gathered on this thread, I guess that you should\n>> have a SRF returning the list of broken blocks, as well as NOTICE\n>> messages.\n> \n> The current patch has an SRF and a WARNING message, do you want an additional\n> NOTICE message or downgrade the existing one?\n\nRight, not sure which one is better, for zero_damaged_pages a WARNING\nis used.\n--\nMichael",
"msg_date": "Wed, 18 Mar 2020 13:20:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Wed, Mar 18, 2020 at 01:20:47PM +0900, Michael Paquier wrote:\n> On Mon, Mar 16, 2020 at 09:21:22AM +0100, Julien Rouhaud wrote:\n> > On Mon, Mar 16, 2020 at 12:29:28PM +0900, Michael Paquier wrote:\n> >> With a large amount of\n> >> shared buffer eviction you actually increase the risk of torn page\n> >> reads. Instead of a logic relying on partition mapping locks, which\n> >> could be unwise on performance grounds, did you consider different\n> >> approaches? For example a kind of pre-emptive lock on the page in\n> >> storage to prevent any shared buffer operation to happen while the\n> >> block is read from storage, that would act like a barrier.\n> >\n> > Even with a workload having a large shared_buffers eviction pattern, I don't\n> > think that there's a high probability of hitting a torn page. Unless I'm\n> > mistaken it can only happen if all those steps happen concurrently to doing the\n> > block read just after releasing the LWLock:\n> >\n> > - postgres read the same block in shared_buffers (including all the locking)\n> > - dirties it\n> > - writes part of the page\n> >\n> > It's certainly possible, but it seems so unlikely that the optimistic lock-less\n> > approach seems like a very good tradeoff.\n>\n> Having false reports in this area could be very confusing for the\n> user. That's for example possible now with checksum verification and\n> base backups.\n\n\nI agree, however this shouldn't be the case here, as the block will be\nrechecked while holding proper lock the 2nd time in case of possible false\npositive before being reported as corrupted. So the only downside is to check\ntwice a corrupted block that's not found in shared buffers (or concurrently\nloaded/modified/half flushed). 
As the number of corrupted or concurrently\nloaded/modified/half flushed blocks should usually be close to zero, it seems\nworthwhile to have a lockless check first for performance reason.\n\n\n> > For the record when I first tested that feature I did try to check dirty\n> > blocks, and it seemed that dirty blocks of shared relation were sometimes\n> > wrongly reported as corrupted. I didn't try to investigate more though.\n>\n> Hmm. It would be good to look at that, correct verification of shared\n> relations matter.\n\n\nI'll try to investigate, but non-dirty shared relation blocks can be checked\nand work as intended.\n\n\n>\n> >> + * - if a block is dirty in shared_buffers, it's ignored as it'll be flushed to\n> >> + * disk either before the end of the next checkpoint or during recovery in\n> >> + * case of unsafe shutdown\n> >> Not sure that the indentation is going to react well on that part of\n> >> the patch, perhaps it would be better to add some \"/*-------\" at the\n> >> beginning and end of the comment block to tell pgindent to ignore this\n> >> part?\n> >\n> > Ok. Although I think only the beginning comment is needed?\n>\n> From src/tools/pgindent/README:\n> \"pgindent will reflow any comment block that's not at the left margin.\n> If this messes up manual formatting that ought to be preserved,\n> protect the comment block with some dashes:\"\n>\n> /*----------\n> \t * Text here will not be touched by pgindent.\n> *----------\n> */\n\n\nFor instance the block comment in gen_partprune_steps_internal() disagrees.\nAnyway I added both as all the nearby codes does that for overall function\ncomments.\n\n\n",
"msg_date": "Wed, 18 Mar 2020 07:06:19 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Wed, Mar 18, 2020 at 01:20:47PM +0900, Michael Paquier wrote:\n> On Mon, Mar 16, 2020 at 09:21:22AM +0100, Julien Rouhaud wrote:\n> > On Mon, Mar 16, 2020 at 12:29:28PM +0900, Michael Paquier wrote:\n> >> Based on the feedback gathered on this thread, I guess that you should\n> >> have a SRF returning the list of broken blocks, as well as NOTICE\n> >> messages.\n> >\n> > The current patch has an SRF and a WARNING message, do you want an additional\n> > NOTICE message or downgrade the existing one?\n>\n> Right, not sure which one is better, for zero_damaged_pages a WARNING\n> is used.\n\n\nSorry forgot to answer that. IMHO a WARNING is better here, as we're talking\nabout data corruption. Also, a WARNING will be reported to both the client and\nserver logs, which sounds like a good thing.\n\n\n",
"msg_date": "Wed, 18 Mar 2020 07:13:12 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Wed, Mar 18, 2020 at 07:06:19AM +0100, Julien Rouhaud wrote:\n> On Wed, Mar 18, 2020 at 01:20:47PM +0900, Michael Paquier wrote:\n> > On Mon, Mar 16, 2020 at 09:21:22AM +0100, Julien Rouhaud wrote:\n> > > On Mon, Mar 16, 2020 at 12:29:28PM +0900, Michael Paquier wrote:\n> > >> With a large amount of\n> > >> shared buffer eviction you actually increase the risk of torn page\n> > >> reads. Instead of a logic relying on partition mapping locks, which\n> > >> could be unwise on performance grounds, did you consider different\n> > >> approaches? For example a kind of pre-emptive lock on the page in\n> > >> storage to prevent any shared buffer operation to happen while the\n> > >> block is read from storage, that would act like a barrier.\n> > >\n> > > Even with a workload having a large shared_buffers eviction pattern, I don't\n> > > think that there's a high probability of hitting a torn page. Unless I'm\n> > > mistaken it can only happen if all those steps happen concurrently to doing the\n> > > block read just after releasing the LWLock:\n> > >\n> > > - postgres read the same block in shared_buffers (including all the locking)\n> > > - dirties it\n> > > - writes part of the page\n> > >\n> > > It's certainly possible, but it seems so unlikely that the optimistic lock-less\n> > > approach seems like a very good tradeoff.\n> >\n> > Having false reports in this area could be very confusing for the\n> > user. That's for example possible now with checksum verification and\n> > base backups.\n>\n>\n> I agree, however this shouldn't be the case here, as the block will be\n> rechecked while holding proper lock the 2nd time in case of possible false\n> positive before being reported as corrupted. So the only downside is to check\n> twice a corrupted block that's not found in shared buffers (or concurrently\n> loaded/modified/half flushed). 
As the number of corrupted or concurrently\n> loaded/modified/half flushed blocks should usually be close to zero, it seems\n> worthwhile to have a lockless check first for performance reason.\n\n\nI just noticed some dumb mistakes while adding the new GUCs. v5 attached to\nfix that, no other changes.",
"msg_date": "Wed, 18 Mar 2020 11:10:55 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Wed, 18 Mar 2020 at 19:11, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Wed, Mar 18, 2020 at 07:06:19AM +0100, Julien Rouhaud wrote:\n> > On Wed, Mar 18, 2020 at 01:20:47PM +0900, Michael Paquier wrote:\n> > > On Mon, Mar 16, 2020 at 09:21:22AM +0100, Julien Rouhaud wrote:\n> > > > On Mon, Mar 16, 2020 at 12:29:28PM +0900, Michael Paquier wrote:\n> > > >> With a large amount of\n> > > >> shared buffer eviction you actually increase the risk of torn page\n> > > >> reads. Instead of a logic relying on partition mapping locks, which\n> > > >> could be unwise on performance grounds, did you consider different\n> > > >> approaches? For example a kind of pre-emptive lock on the page in\n> > > >> storage to prevent any shared buffer operation to happen while the\n> > > >> block is read from storage, that would act like a barrier.\n> > > >\n> > > > Even with a workload having a large shared_buffers eviction pattern, I don't\n> > > > think that there's a high probability of hitting a torn page. Unless I'm\n> > > > mistaken it can only happen if all those steps happen concurrently to doing the\n> > > > block read just after releasing the LWLock:\n> > > >\n> > > > - postgres read the same block in shared_buffers (including all the locking)\n> > > > - dirties it\n> > > > - writes part of the page\n> > > >\n> > > > It's certainly possible, but it seems so unlikely that the optimistic lock-less\n> > > > approach seems like a very good tradeoff.\n> > >\n> > > Having false reports in this area could be very confusing for the\n> > > user. That's for example possible now with checksum verification and\n> > > base backups.\n> >\n> >\n> > I agree, however this shouldn't be the case here, as the block will be\n> > rechecked while holding proper lock the 2nd time in case of possible false\n> > positive before being reported as corrupted. 
So the only downside is to check\n> > twice a corrupted block that's not found in shared buffers (or concurrently\n> > loaded/modified/half flushed). As the number of corrupted or concurrently\n> > loaded/modified/half flushed blocks should usually be close to zero, it seems\n> > worthwhile to have a lockless check first for performance reason.\n>\n>\n> I just noticed some dumb mistakes while adding the new GUCs. v5 attached to\n> fix that, no other changes.\n\nThank you for updating the patch. I have some comments:\n\n1.\n+ <entry>\n+ <literal><function>pg_check_relation(<parameter>relation</parameter>\n<type>oid</type>, <parameter>fork</parameter>\n<type>text</type>)</function></literal>\n+ </entry>\n\nLooking at the declaration of pg_check_relation, 'relation' and 'fork'\nare optional arguments. So I think the above is not correct. But as I\ncommented below, 'relation' should not be optional, so maybe the above\nline could be:\n\n+ <literal><function>pg_check_relation(<parameter>relation</parameter>\n<type>oid</type>[, <parameter>fork</parameter>\n<type>text</type>])</function></literal>\n\n2.\n+ <indexterm>\n+ <primary>pg_check_relation</primary>\n+ </indexterm>\n+ <para>\n+ <function>pg_check_relation</function> iterates over all the blocks of all\n+ or the specified fork of a given relation and verify their checksum. It\n+ returns the list of blocks for which the found checksum doesn't match the\n+ expected one. You must be a member of the\n+ <literal>pg_read_all_stats</literal> role to use this function. It can\n+ only be used if data checksums are enabled. 
See <xref\n+ linkend=\"app-initdb-data-checksums\"/> for more information.\n+ </para>\n\n* I think we need a description about possible values for 'fork'\n(i.e., 'main', 'vm', 'fsm' and 'init'), and the behavior when 'fork'\nis omitted.\n\n* Do we need to explain about checksum cost-based delay here?\n\n3.\n+CREATE OR REPLACE FUNCTION pg_check_relation(\n+ IN relation regclass DEFAULT NULL::regclass, IN fork text DEFAULT NULL::text,\n+ OUT relid oid, OUT forknum integer, OUT failed_blocknum bigint,\n+ OUT expected_checksum integer, OUT found_checksum integer)\n+ RETURNS SETOF record VOLATILE LANGUAGE internal AS 'pg_check_relation'\n+ PARALLEL RESTRICTED;\n\nNow that pg_check_relation doesn't accept NULL as 'relation', I think\nwe need to make 'relation' a mandatory argument.\n\n4.\n+ /* Check if the relation (still) exists */\n+ if (SearchSysCacheExists1(RELOID, ObjectIdGetDatum(relid)))\n+ {\n+ /*\n+ * We use a standard relation_open() to acquire the initial lock. It\n+ * means that this will block until the lock is acquired, or will\n+ * raise an ERROR if lock_timeout has been set. If caller wants to\n+ * check multiple tables while relying on a maximum wait time, it\n+ * should process tables one by one instead of relying on a global\n+ * processing with the main SRF.\n+ */\n+ relation = relation_open(relid, AccessShareLock);\n+ }\n\nIIUC the above was necessary because we used to have\ncheck_all_relations() which iterates all relations on the database to\ndo checksum checks. But now that we don't have that function and\npg_check_relation processes one relation, can we just do\nrelation_open() here?\n\n5.\nI think we need to check if the relation is a temp relation. I'm not\nsure it's worth checking checksums of temp relations but at least we\nneed not check others' temp relations.\n\n6.\n+/*\n+ * Safely read the wanted buffer from disk, dealing with possible concurrency\n+ * issue. 
Note that if a buffer is found dirty in shared_buffers, no read will\n+ * be performed and the caller will be informed that no check should be done.\n+ * We can safely ignore such buffers as they'll be written before next\n+ * checkpoint's completion..\n+ *\n+ * The following locks can be used in this function:\n+ *\n+ * - shared LWLock on the target buffer pool partition mapping.\n+ * - IOLock on the buffer\n+ *\n+ * The IOLock is taken when reading the buffer from disk if it exists in\n+ * shared_buffers, to avoid torn pages.\n+ *\n+ * If the buffer isn't in shared_buffers, it'll be read from disk without any\n+ * lock unless caller asked otherwise, setting needlock. In this case, the\n+ * read will be done while the buffer mapping partition LWLock is still being\n+ * held. Reading with this lock is to avoid the unlikely but possible case\n+ * that a buffer wasn't present in shared buffers when we checked but it then\n+ * alloc'ed in shared_buffers, modified and flushed concurrently when we\n+ * later try to read it, leading to false positive due to torn page. Caller\n+ * can read first the buffer without holding the target buffer mapping\n+ * partition LWLock to have an optimistic approach, and reread the buffer\n+ * from disk in case of error.\n+ *\n+ * Caller should hold an AccessShareLock on the Relation\n+ */\n\nI think the above comment also needs some \"/*-------\" at the beginning and end.\n\n7.\n+static void\n+check_get_buffer(Relation relation, ForkNumber forknum,\n+ BlockNumber blkno, char *buffer, bool needlock, bool *checkit,\n+ bool *found_in_sb)\n+{\n\nMaybe we can make check_get_buffer() return a bool indicating we found\na buffer to check, instead of having '*checkit'. 
That way, we can\nsimplify the following code:\n\n+ check_get_buffer(relation, forknum, blkno, buffer, force_lock,\n+ &checkit, &found_in_sb);\n+\n+ if (!checkit)\n+ continue;\n\nto something like:\n\n+ if (!check_get_buffer(relation, forknum, blkno, buffer, force_lock,\n+ &found_in_sb))\n+ continue;\n\n8.\n+ if (PageIsVerified(buffer, blkno))\n+ {\n+ /*\n+ * If the page is really new, there won't by any checksum to be\n+ * computed or expected.\n+ */\n+ *chk_expected = *chk_found = NoComputedChecksum;\n+ return true;\n+ }\n+ else\n+ {\n+ /*\n+ * There's a corruption, but since this affect PageIsNew, we\n+ * can't compute a checksum, so set NoComputedChecksum for the\n+ * expected checksum.\n+ */\n+ *chk_expected = NoComputedChecksum;\n+ *chk_found = hdr->pd_checksum;\n+ }\n+ return false;\n\n* I think the 'else' is not necessary here.\n\n* Setting *chk_expected and *chk_found seems useless when we return\ntrue. The caller doesn't use them.\n\n* Should we forcibly overwrite ignore_checksum_failure to off in\npg_check_relation()? Otherwise, this logic seems not work fine.\n\n* I don't understand why we cannot compute a checksum in case where a\npage looks like a new page but is actually corrupted. 
Could you please\nelaborate on that?\n\n8.\n+ {\n+ {\"checksum_cost_page_hit\", PGC_USERSET, RESOURCES_CHECKSUM_DELAY,\n+ gettext_noop(\"Checksum cost for a page found in the buffer cache.\"),\n+ NULL\n+ },\n+ &ChecksumCostPageHit,\n+ 1, 0, 10000,\n+ NULL, NULL, NULL\n+ },\n\n* There is no description about the newly added four GUC parameters in the doc.\n\n* We need to put new GUC parameters into postgresql.conf.sample as well.\n\n* The patch doesn't use checksum_cost_page_hit at all.\n\n9.\ndiff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c\nindex eb19644419..37f63e747c 100644\n--- a/src/backend/utils/init/globals.c\n+++ b/src/backend/utils/init/globals.c\n@@ -134,6 +134,14 @@ int max_worker_processes = 8;\n int max_parallel_workers = 8;\n int MaxBackends = 0;\n\n+int ChecksumCostPageHit = 1; /* GUC parameters for\nchecksum check */\n+int ChecksumCostPageMiss = 10;\n+int ChecksumCostLimit = 200;\n+double ChecksumCostDelay = 0;\n+\n+int ChecksumCostBalance = 0; /* working state for\nchecksums check */\n+bool ChecksumCostActive = false;\n\nCan we declare them in checksum.c since these parameters are used only\nin checksum.c and it does I/O my itself.\n\n10.\n+ /* Report the failure to the stat collector and the logs. */\n+ pgstat_report_checksum_failure();\n+ ereport(WARNING,\n+ (errcode(ERRCODE_DATA_CORRUPTED),\n+ errmsg(\"invalid page in block %u of relation %s\",\n+ blkno,\n+ relpath(relation->rd_smgr->smgr_rnode, forknum))));\n\nI think we could do pgstat_report_checksum_failure() and emit WARNING\ntwice for the same page since PageIsVerified() also does the same.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 28 Mar 2020 12:28:27 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Sat, Mar 28, 2020 at 12:28:27PM +0900, Masahiko Sawada wrote:\n> On Wed, 18 Mar 2020 at 19:11, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > v5 attached\n> \n> Thank you for updating the patch. I have some comments:\n\nThanks a lot for the review!\n\n> 1.\n> + <entry>\n> + <literal><function>pg_check_relation(<parameter>relation</parameter>\n> <type>oid</type>, <parameter>fork</parameter>\n> <type>text</type>)</function></literal>\n> + </entry>\n> \n> Looking at the declaration of pg_check_relation, 'relation' and 'fork'\n> are optional arguments. So I think the above is not correct. But as I\n> commented below, 'relation' should not be optional, so maybe the above\n> line could be:\n> \n> + <literal><function>pg_check_relation(<parameter>relation</parameter>\n> <type>oid</type>[, <parameter>fork</parameter>\n> <type>text</type>])</function></literal>\n\nYes I missed that when making relation mandatory. Fixed.\n\n> 2.\n> + <indexterm>\n> + <primary>pg_check_relation</primary>\n> + </indexterm>\n> + <para>\n> + <function>pg_check_relation</function> iterates over all the blocks of all\n> + or the specified fork of a given relation and verify their checksum. It\n> + returns the list of blocks for which the found checksum doesn't match the\n> + expected one. You must be a member of the\n> + <literal>pg_read_all_stats</literal> role to use this function. It can\n> + only be used if data checksums are enabled. 
See <xref\n> + linkend=\"app-initdb-data-checksums\"/> for more information.\n> + </para>\n> \n> * I think we need a description about possible values for 'fork'\n> (i.g., 'main', 'vm', 'fsm' and 'init'), and the behavior when 'fork'\n> is omitted.\n\nDone.\n\n> * Do we need to explain about checksum cost-based delay here?\n\nIt's probably better in config.sgml, nearby vacuum cost-based delay, so done\nthis way with a link to reference that part.\n\n> 3.\n> +CREATE OR REPLACE FUNCTION pg_check_relation(\n> + IN relation regclass DEFAULT NULL::regclass, IN fork text DEFAULT NULL::text,\n> + OUT relid oid, OUT forknum integer, OUT failed_blocknum bigint,\n> + OUT expected_checksum integer, OUT found_checksum integer)\n> + RETURNS SETOF record VOLATILE LANGUAGE internal AS 'pg_check_relation'\n> + PARALLEL RESTRICTED;\n> \n> Now that pg_check_relation doesn't accept NULL as 'relation', I think\n> we need to make 'relation' a mandatory argument.\n\nCorrect, fixed.\n\n> 4.\n> + /* Check if the relation (still) exists */\n> + if (SearchSysCacheExists1(RELOID, ObjectIdGetDatum(relid)))\n> + {\n> + /*\n> + * We use a standard relation_open() to acquire the initial lock. It\n> + * means that this will block until the lock is acquired, or will\n> + * raise an ERROR if lock_timeout has been set. If caller wants to\n> + * check multiple tables while relying on a maximum wait time, it\n> + * should process tables one by one instead of relying on a global\n> + * processing with the main SRF.\n> + */\n> + relation = relation_open(relid, AccessShareLock);\n> + }\n> \n> IIUC the above was necessary because we used to have\n> check_all_relations() which iterates all relations on the database to\n> do checksum checks. But now that we don't have that function and\n> pg_check_relation processes one relation. Can we just do\n> relation_open() here?\n\nAh yes I missed that comment. I think only the comment needed to be updated to\nremove any part related to NULL-relation call. 
I ended up removing the whole\ncomment since locking and lock_timeout behavior is inherent to relation_open\nand there's no need to document that any further now that we always only check\none relation at a time.\n\n> 5.\n> I think we need to check if the relation is a temp relation. I'm not\n> sure it's worth to check checksums of temp relations but at least we\n> need not to check other's temp relations.\n\nGood point. I think it's still worthwhile to check the backend's temp\nrelation, although typical usage should be a bgworker/cron job doing that check\nso there shouldn't be any.\n\n> 6.\n> +/*\n> + * Safely read the wanted buffer from disk, dealing with possible concurrency\n> + * issue. Note that if a buffer is found dirty in shared_buffers, no read will\n> + * be performed and the caller will be informed that no check should be done.\n> + * We can safely ignore such buffers as they'll be written before next\n> + * checkpoint's completion..\n> [...] \n> + */\n> \n> I think the above comment also needs some \"/*-------\" at the beginning and end.\n\nFixed.\n\n> 7.\n> +static void\n> +check_get_buffer(Relation relation, ForkNumber forknum,\n> + BlockNumber blkno, char *buffer, bool needlock, bool *checkit,\n> + bool *found_in_sb)\n> +{\n> \n> Maybe we can make check_get_buffer() return a bool indicating we found\n> a buffer to check, instead of having '*checkit'. 
That way, we can\n> simplify the following code:\n> \n> + check_get_buffer(relation, forknum, blkno, buffer, force_lock,\n> + &checkit, &found_in_sb);\n> +\n> + if (!checkit)\n> + continue;\n> \n> to something like:\n> \n> + if (!check_get_buffer(relation, forknum, blkno, buffer, force_lock,\n> + &found_in_sb))\n> + continue;\n\nChanged.\n\n> 8.\n> + if (PageIsVerified(buffer, blkno))\n> + {\n> + /*\n> + * If the page is really new, there won't by any checksum to be\n> + * computed or expected.\n> + */\n> + *chk_expected = *chk_found = NoComputedChecksum;\n> + return true;\n> + }\n> + else\n> + {\n> + /*\n> + * There's a corruption, but since this affect PageIsNew, we\n> + * can't compute a checksum, so set NoComputedChecksum for the\n> + * expected checksum.\n> + */\n> + *chk_expected = NoComputedChecksum;\n> + *chk_found = hdr->pd_checksum;\n> + }\n> + return false;\n> \n> * I think the 'else' is not necessary here.\n\nAFAICT it's, see below.\n\n> * Setting *chk_expected and *chk_found seems useless when we return\n> true. The caller doesn't use them.\n\nIndeed, fixed.\n\n> * Should we forcibly overwrite ignore_checksum_failure to off in\n> pg_check_relation()? Otherwise, this logic seems not work fine.\n> \n> * I don't understand why we cannot compute a checksum in case where a\n> page looks like a new page but is actually corrupted. Could you please\n> elaborate on that?\n\nPageIsVerified has a different behavior depending on whether the page looks new\nor not. 
If the page looks like new, it only checks that it's indeed a new\npage, and otherwise tries to verify the checksum.\n\nAlso, pg_check_page() has an assert to make sure that the page isn't (or doesn't\nlook like) new.\n\nSo it seems to me that the 'else' is required to properly detect a real or fake\nPageIsNew, and try to compute checksums only when required.\n\n> 8.\n> + {\n> + {\"checksum_cost_page_hit\", PGC_USERSET, RESOURCES_CHECKSUM_DELAY,\n> + gettext_noop(\"Checksum cost for a page found in the buffer cache.\"),\n> + NULL\n> + },\n> + &ChecksumCostPageHit,\n> + 1, 0, 10000,\n> + NULL, NULL, NULL\n> + },\n> \n> * There is no description about the newly added four GUC parameters in the doc.\n> \n> * We need to put new GUC parameters into postgresql.conf.sample as well.\n\nFixed both.\n\n> * The patch doesn't use checksum_cost_page_hit at all.\n\nIndeed, I also realized that while working on previous issues. I removed it\nand renamed checksum_cost_page_miss to checksum_cost_page.\n>\n> 9.\n> diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c\n> index eb19644419..37f63e747c 100644\n> --- a/src/backend/utils/init/globals.c\n> +++ b/src/backend/utils/init/globals.c\n> @@ -134,6 +134,14 @@ int max_worker_processes = 8;\n> int max_parallel_workers = 8;\n> int MaxBackends = 0;\n> \n> +int ChecksumCostPageHit = 1; /* GUC parameters for\n> checksum check */\n> +int ChecksumCostPageMiss = 10;\n> +int ChecksumCostLimit = 200;\n> +double ChecksumCostDelay = 0;\n> +\n> +int ChecksumCostBalance = 0; /* working state for\n> checksums check */\n> +bool ChecksumCostActive = false;\n> \n> Can we declare them in checksum.c since these parameters are used only\n> in checksum.c and it does I/O my itself.\n\nThe GUC parameters would still need to be global, so for consistency I kept all\nthe variables in globals.c.\n>\n> 10.\n> + /* Report the failure to the stat collector and the logs. 
*/\n> + pgstat_report_checksum_failure();\n> + ereport(WARNING,\n> + (errcode(ERRCODE_DATA_CORRUPTED),\n> + errmsg(\"invalid page in block %u of relation %s\",\n> + blkno,\n> + relpath(relation->rd_smgr->smgr_rnode, forknum))));\n> \n> I think we could do pgstat_report_checksum_failure() and emit WARNING\n> twice for the same page since PageIsVerified() also does the same.\n\nAs mentioned before, in this patch I only call PageIsVerified() if the buffer\nlooks like new, and in this case PageIsVerified() only verifies that it's a true\nall-zero-page, and won't try to verify the checksum, so there's no possibility\nof duplicated report. I modified the comments to document all the interactions\nand expectations.\n\nv6 attached.",
"msg_date": "Sat, 28 Mar 2020 13:18:58 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Sat, 28 Mar 2020 at 21:19, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Sat, Mar 28, 2020 at 12:28:27PM +0900, Masahiko Sawada wrote:\n> > On Wed, 18 Mar 2020 at 19:11, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > > v5 attached\n> >\n> > Thank you for updating the patch. I have some comments:\n>\n> Thanks a lot for the review!\n\nThank you for updating the patch!\n\n> > 4.\n> > + /* Check if the relation (still) exists */\n> > + if (SearchSysCacheExists1(RELOID, ObjectIdGetDatum(relid)))\n> > + {\n> > + /*\n> > + * We use a standard relation_open() to acquire the initial lock. It\n> > + * means that this will block until the lock is acquired, or will\n> > + * raise an ERROR if lock_timeout has been set. If caller wants to\n> > + * check multiple tables while relying on a maximum wait time, it\n> > + * should process tables one by one instead of relying on a global\n> > + * processing with the main SRF.\n> > + */\n> > + relation = relation_open(relid, AccessShareLock);\n> > + }\n> >\n> > IIUC the above was necessary because we used to have\n> > check_all_relations() which iterates all relations on the database to\n> > do checksum checks. But now that we don't have that function and\n> > pg_check_relation processes one relation. Can we just do\n> > relation_open() here?\n>\n> Ah yes I missed that comment. I think only the comment needed to be updated to\n> remove any part related to NULL-relation call. I ended up removign the whole\n> comment since locking and lock_timeout behavior is inherent to relation_open\n> and there's no need to document that any further now that we always only check\n> one relation at a time.\n\nThe current patch still checks SearchSysCacheExists1() before\nrelation_open. Why do we need to call SearchSysCacheExists1() here? I\nthink if the given relation doesn't exist, relation_open() will raise\nan error \"could not open relation with OID %u\".\n\n+ /* Open the relation if it exists. 
*/\n+ if (SearchSysCacheExists1(RELOID, ObjectIdGetDatum(relid)))\n+ {\n+ relation = relation_open(relid, AccessShareLock);\n+ }\n\n\n> > 8.\n> > + if (PageIsVerified(buffer, blkno))\n> > + {\n> > + /*\n> > + * If the page is really new, there won't by any checksum to be\n> > + * computed or expected.\n> > + */\n> > + *chk_expected = *chk_found = NoComputedChecksum;\n> > + return true;\n> > + }\n> > + else\n> > + {\n> > + /*\n> > + * There's a corruption, but since this affect PageIsNew, we\n> > + * can't compute a checksum, so set NoComputedChecksum for the\n> > + * expected checksum.\n> > + */\n> > + *chk_expected = NoComputedChecksum;\n> > + *chk_found = hdr->pd_checksum;\n> > + }\n> > + return false;\n> >\n> > * I think the 'else' is not necessary here.\n>\n> AFAICT it's, see below.\n>\n> > * Setting *chk_expected and *chk_found seems useless when we return\n> > true. The caller doesn't use them.\n>\n> Indeed, fixed.\n\nThe patch still sets values to both?\n\n+ if (PageIsVerified(buffer, blkno))\n+ {\n+ /* No corruption. */\n+ *chk_expected = *chk_found = NoComputedChecksum;\n+ return true;\n+ }\n\n>\n> > * Should we forcibly overwrite ignore_checksum_failure to off in\n> > pg_check_relation()? Otherwise, this logic seems not work fine.\n> >\n> > * I don't understand why we cannot compute a checksum in case where a\n> > page looks like a new page but is actually corrupted. Could you please\n> > elaborate on that?\n>\n> PageIsVerified has a different behavior depending on whether the page looks new\n> or not. If the page looks like new, it only checks that it's indeed a new\n> page, and otherwise try to verify the checksum.\n>\n> Also, pg_check_page() has an assert to make sure that the page isn't (or don't\n> look like) new.\n>\n> So it seems to me that the 'else' is required to properly detect a real or fake\n> PageIsNew, and try to compute checksums only when required.\n\nThank you for your explanation! 
I understand.\n\nI thought we can arrange the code to something like:\n\nif (PageIsNew(hdr))\n{\n if (PageIsVerified(hdr))\n {\n *chk_expected = *chk_found = NoComputedChecksum;\n return true;\n }\n\n *chk_expected = NoComputedChecksum;\n *chk_found = hdr->pd_checksum;\n return false;\n}\n\nBut since it's not a critical problem you can ignore it.\n\n>\n> > 8.\n> > + {\n> > + {\"checksum_cost_page_hit\", PGC_USERSET, RESOURCES_CHECKSUM_DELAY,\n> > + gettext_noop(\"Checksum cost for a page found in the buffer cache.\"),\n> > + NULL\n> > + },\n> > + &ChecksumCostPageHit,\n> > + 1, 0, 10000,\n> > + NULL, NULL, NULL\n> > + },\n> >\n> > * There is no description about the newly added four GUC parameters in the doc.\n> >\n> > * We need to put new GUC parameters into postgresql.conf.sample as well.\n>\n> Fixed both.\n>\n> > * The patch doesn't use checksum_cost_page_hit at all.\n>\n> Indeed, I also realized that while working on previous issues. I removed it\n> and renamed checksum_cost_page_miss to checksum_cost_page.\n\nPerhaps we can use checksum_cost_page_hit when we found the page in\nthe shared buffer but it's marked as dirty?\n\n> >\n> > 9.\n> > diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c\n> > index eb19644419..37f63e747c 100644\n> > --- a/src/backend/utils/init/globals.c\n> > +++ b/src/backend/utils/init/globals.c\n> > @@ -134,6 +134,14 @@ int max_worker_processes = 8;\n> > int max_parallel_workers = 8;\n> > int MaxBackends = 0;\n> >\n> > +int ChecksumCostPageHit = 1; /* GUC parameters for\n> > checksum check */\n> > +int ChecksumCostPageMiss = 10;\n> > +int ChecksumCostLimit = 200;\n> > +double ChecksumCostDelay = 0;\n> > +\n> > +int ChecksumCostBalance = 0; /* working state for\n> > checksums check */\n> > +bool ChecksumCostActive = false;\n> >\n> > Can we declare them in checksum.c since these parameters are used only\n> > in checksum.c and it does I/O my itself.\n>\n> The GUC parameters would still need to be global, so for 
consistency I kept all\n> the variables in globals.c.\n\nOkay.\n\n> >\n> > 10.\n> > + /* Report the failure to the stat collector and the logs. */\n> > + pgstat_report_checksum_failure();\n> > + ereport(WARNING,\n> > + (errcode(ERRCODE_DATA_CORRUPTED),\n> > + errmsg(\"invalid page in block %u of relation %s\",\n> > + blkno,\n> > + relpath(relation->rd_smgr->smgr_rnode, forknum))));\n> >\n> > I think we could do pgstat_report_checksum_failure() and emit WARNING\n> > twice for the same page since PageIsVerified() also does the same.\n>\n> As mentioned before, in this patch I only calls PageIsVerified() if the buffer\n> looks like new, and in this case PageIsVerified() only verify that it's a true\n> all-zero-page, and won't try to verify the checksum, so there's no possibility\n> of duplicated report. I modified the comments to document all the interactions\n> and expectations.\n\nYou're right. Thank you for the explanation!\n\nI've read the latest patch and here are some random comments:\n\n1.\n+ /*\n+ * Add a page miss cost, as we're always reading outside the shared\n+ * buffers.\n+ */\n+ /* Add a page cost. */\n+ ChecksumCostBalance += ChecksumCostPage;\n\nThere are duplicate comments.\n\n2.\n+ /* Dirty pages are ignored as they'll be flushed soon. */\n+ if (buf_state & BM_DIRTY)\n+ checkit = false;\n\nShould we check the buffer if it has BM_TAG_VALID as well here? I\nthought there might be a possibility that BufTableLookup() returns a\nbuf_Id but its buffer tag is not valid, for example when the previous\nread failed after inserting the buffer tag to the buffer table.\n\n3.\n+ /* Add a page cost. */\n+ ChecksumCostBalance += ChecksumCostPage;\n+\n+ return checkit;\n+}\n\nThe check_get_buffer() seems to be slightly complex to me but when we\nreach the end of this function we always return true. Similarly, in\nthe case where we read the block while holding a partition lock we\nalways return true as well. Is my understanding right? 
If so, it might\nbe better to put some assertions.\n\n4.\n@@ -10825,6 +10825,14 @@\n proallargtypes => '{oid,text,int8,timestamptz}', proargmodes => '{i,o,o,o}',\n proargnames => '{tablespace,name,size,modification}',\n prosrc => 'pg_ls_tmpdir_1arg' },\n+{ oid => '9147', descr => 'check data integrity for one or all relations',\n+ proname => 'pg_check_relation', proisstrict => 'f', procost => '10000',\n+ prorows => '20', proretset => 't', proparallel => 'r',\n+ provolatile => 'v', prorettype => 'record', proargtypes => 'regclass text',\n+ proallargtypes => '{regclass,text,oid,int4,int8,int4,int4}',\n+ proargmodes => '{i,i,o,o,o,o,o}',\n+ proargnames =>\n'{relation,fork,relid,forknum,failed_blocknum,expected_checksum,found_checksum}',\n+ prosrc => 'pg_check_relation' },\n\nWhy is pg_check_relation() not a strict function? I think\nprostrict can be 'true' for this function and we can drop checking if\nthe first argument is NULL.\n\n5.\n+ memset(values, 0, sizeof(values));\n+ memset(nulls, 0, sizeof(nulls));\n\nI think we can do memset right before setting values to them, that is,\nafter checking (!found_in_sb && !force_lock).\n\n6.\n+static bool\n+check_buffer(char *buffer, uint32 blkno, uint16 *chk_expected,\n+ uint16 *chk_found)\n+{\n+ PageHeader hdr = (PageHeader) buffer;\n+\n+ Assert(chk_expected && chk_found);\n+\n+ if (PageIsNew(hdr))\n+ {\n+ /*\n+ * Check if the page is really new or if there's a corruption that\n+ * affected PageIsNew detection. Note that PageIsVerified won't try to\n+ * detect checksum corruption in this case, so there's no risk of\n+ * duplicated corruption report.\n+ */\n+ if (PageIsVerified(buffer, blkno))\n\nHow about using Page instead of PageHeader? 
Looking at other codes,\nISTM we usually pass Page to both PageIsNew() and PageIsVerified().\n\n7.\n+ <entry>\n+ <literal><function>pg_check_relation(<parameter>relation</parameter>\n<type>oid</type>[, <parameter>fork</parameter>\n<type>text</type>])</function></literal>.\n+ </entry>\n\n+{ oid => '9147', descr => 'check data integrity for one or all relations',\n+ proname => 'pg_check_relation', proisstrict => 'f', procost => '10000',\n+ prorows => '20', proretset => 't', proparallel => 'r',\n+ provolatile => 'v', prorettype => 'record', proargtypes => 'regclass text',\n+ proallargtypes => '{regclass,text,oid,int4,int8,int4,int4}',\n+ proargmodes => '{i,i,o,o,o,o,o}',\n+ proargnames =>\n'{relation,fork,relid,forknum,failed_blocknum,expected_checksum,found_checksum}',\n+ prosrc => 'pg_check_relation' },\n\nThe function argument data types don't match in the doc and function\ndeclaretion. relation is 'oid' in the doc but is 'regclass' in the\nfunction declaretion.\n\n8.\n+#define SRF_COLS 5 /* Number of output arguments in the SRF */\n\nLooking at similar built-in functions that return set of records they\nuse a more specific name for the number of returned columns such as\nPG_STAT_GET_WAL_SENDERS_COLS and PG_GET_SHMEM_SIZES_COLS. How about\nPG_CHECK_RELATION_COLS?\n\ncheck_relation_fork() seems to quite depends on pg_check_relation()\nbecause the returned tuplestore is specified by pg_check_relation().\nIt's just an idea but to improve reusability, how about moving\ncheck_relation_fork() to checksumfunc.c? 
That is, in checksumfuncs.c\nwhile iterating all blocks we call a new function in checksum.c, say\ncheck_one_block() function, which has the following part and is\nresponsible for getting, checking the specified block and returning a\nboolean indicating whether the block has corruption or not, along with\nchk_found and chk_expected:\n\n /*\n * To avoid too much overhead, the buffer will be first read without\n * the locks that would guarantee the lack of false positive, as such\n * events should be quite rare.\n */\nRetry:\n if (!check_get_buffer(relation, forknum, blkno, buffer, force_lock,\n &found_in_sb))\n continue;\n\n if (check_buffer(buffer, blkno, &chk_expected, &chk_found))\n continue;\n\n /*\n * If we get a failure and the buffer wasn't found in shared buffers,\n * reread the buffer with suitable lock to avoid false positive. See\n * check_get_buffer for more details.\n */\n if (!found_in_sb && !force_lock)\n {\n force_lock = true;\n goto Retry;\n }\n\nA new function in checksumfuncs.c or pg_check_relation will be\nresponsible for storing the result to the tuplestore. That way,\ncheck_one_block() will be useful for other use when we want to check\nif the particular block has corruption with low overhead.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 3 Apr 2020 12:24:50 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Fri, Apr 03, 2020 at 12:24:50PM +0900, Masahiko Sawada wrote:\n> On Sat, 28 Mar 2020 at 21:19, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> The current patch still checks SearchSysCacheExists1() before\n> relation_open. Why do we need to call SearchSysCacheExists1() here? I\n> think if the given relation doesn't exist, relation_open() will raise\n> an error \"could not open relation with OID %u\".\n> \n> + /* Open the relation if it exists. */\n> + if (SearchSysCacheExists1(RELOID, ObjectIdGetDatum(relid)))\n> + {\n> + relation = relation_open(relid, AccessShareLock);\n> + }\n\nOops yes sorry about that. Fixed.\n\n\n> > > 8.\n> > > + if (PageIsVerified(buffer, blkno))\n> > > + {\n> > > + /*\n> > > + * If the page is really new, there won't by any checksum to be\n> > > + * computed or expected.\n> > > + */\n> > > + *chk_expected = *chk_found = NoComputedChecksum;\n> > > + return true;\n> > > + }\n> > > + else\n> > > + {\n> > > + /*\n> > > + * There's a corruption, but since this affect PageIsNew, we\n> > > + * can't compute a checksum, so set NoComputedChecksum for the\n> > > + * expected checksum.\n> > > + */\n> > > + *chk_expected = NoComputedChecksum;\n> > > + *chk_found = hdr->pd_checksum;\n> > > + }\n> > > + return false;\n> > >\n> > > * I think the 'else' is not necessary here.\n> >\n> > AFAICT it's, see below.\n> >\n> > > * Setting *chk_expected and *chk_found seems useless when we return\n> > > true. The caller doesn't use them.\n> >\n> > Indeed, fixed.\n> \n> The patch still sets values to both?\n> \n> + if (PageIsVerified(buffer, blkno))\n> + {\n> + /* No corruption. */\n> + *chk_expected = *chk_found = NoComputedChecksum;\n> + return true;\n> + }\n\n\nSorry again, fixed.\n\n\n> > > * Should we forcibly overwrite ignore_checksum_failure to off in\n> > > pg_check_relation()? 
Otherwise, this logic seems not work fine.\n> > >\n> > > * I don't understand why we cannot compute a checksum in case where a\n> > > page looks like a new page but is actually corrupted. Could you please\n> > > elaborate on that?\n> >\n> > PageIsVerified has a different behavior depending on whether the page looks new\n> > or not. If the page looks like new, it only checks that it's indeed a new\n> > page, and otherwise try to verify the checksum.\n> >\n> > Also, pg_check_page() has an assert to make sure that the page isn't (or don't\n> > look like) new.\n> >\n> > So it seems to me that the 'else' is required to properly detect a real or fake\n> > PageIsNew, and try to compute checksums only when required.\n> \n> Thank you for your explanation! I understand.\n> \n> I thought we can arrange the code to something like:\n> \n> if (PageIsNew(hdr))\n> {\n> if (PageIsVerified(hdr))\n> {\n> *chk_expected = *chk_found = NoComputedChecksum;\n> return true;\n> }\n> \n> *chk_expected = NoComputedChecksum;\n> *chk_found = hdr->pd_checksum;\n> return false;\n> }\n> \n> But since it's not a critical problem you can ignore it.\n\n\nI like it, so done!\n\n\n> > > 8.\n> > > + {\n> > > + {\"checksum_cost_page_hit\", PGC_USERSET, RESOURCES_CHECKSUM_DELAY,\n> > > + gettext_noop(\"Checksum cost for a page found in the buffer cache.\"),\n> > > + NULL\n> > > + },\n> > > + &ChecksumCostPageHit,\n> > > + 1, 0, 10000,\n> > > + NULL, NULL, NULL\n> > > + },\n> > >\n> > > * There is no description about the newly added four GUC parameters in the doc.\n> > >\n> > > * We need to put new GUC parameters into postgresql.conf.sample as well.\n> >\n> > Fixed both.\n> >\n> > > * The patch doesn't use checksum_cost_page_hit at all.\n> >\n> > Indeed, I also realized that while working on previous issues. 
I removed it\n> > and renamed checksum_cost_page_miss to checksum_cost_page.\n> \n> Perhaps we can use checksum_cost_page_hit when we found the page in\n> the shared buffer but it's marked as dirty?\n\n\nThe thing is that when the buffer is dirty, we won't do any additional check,\nthus not adding any overhead. What may be needed here is to account for the\nlocking overhead (in all cases), so that if all (or almost all) the buffers are\ndirty and in shared buffers the execution can be throttled. I don't know how\nmuch an issue it can be, but if that's something to be fixed then page_hit\ndoesn't look like the right answer for that.\n\n\n> I've read the latest patch and here is random comments:\n> \n> 1.\n> + /*\n> + * Add a page miss cost, as we're always reading outside the shared\n> + * buffers.\n> + */\n> + /* Add a page cost. */\n> + ChecksumCostBalance += ChecksumCostPage;\n> \n> There are duplicate comments.\n\nFixed.\n\n\n> 2.\n> + /* Dirty pages are ignored as they'll be flushed soon. */\n> + if (buf_state & BM_DIRTY)\n> + checkit = false;\n> \n> Should we check the buffer if it has BM_TAG_VALID as well here? I\n> thought there might be a possibility that BufTableLookup() returns a\n> buf_Id but its buffer tag is not valid for example when the previous\n> read failed after inserting the buffer tag to the buffer table.\n\n\nGood point, fixed.\n\n\n> 3.\n> + /* Add a page cost. */\n> + ChecksumCostBalance += ChecksumCostPage;\n> +\n> + return checkit;\n> +}\n> \n> The check_get_buffer() seems to be slightly complex to me but when we\n> reached the end of this function we always return true. Similarly, in\n> the case where we read the block while holding a partition lock we\n> always return true as well. Is my understanding right? If so, it might\n> be better to put some assertions.\n\n\nYes it's a little bit complex. I used this approach to avoid the need to\nrelease the locks all over the place, but maybe this doesn't really improve\nthings. 
I added asserts and comments anyway as suggested, thanks.\n\n\n> 4.\n> @@ -10825,6 +10825,14 @@\n> proallargtypes => '{oid,text,int8,timestamptz}', proargmodes => '{i,o,o,o}',\n> proargnames => '{tablespace,name,size,modification}',\n> prosrc => 'pg_ls_tmpdir_1arg' },\n> +{ oid => '9147', descr => 'check data integrity for one or all relations',\n> + proname => 'pg_check_relation', proisstrict => 'f', procost => '10000',\n> + prorows => '20', proretset => 't', proparallel => 'r',\n> + provolatile => 'v', prorettype => 'record', proargtypes => 'regclass text',\n> + proallargtypes => '{regclass,text,oid,int4,int8,int4,int4}',\n> + proargmodes => '{i,i,o,o,o,o,o}',\n> + proargnames =>\n> '{relation,fork,relid,forknum,failed_blocknum,expected_checksum,found_checksum}',\n> + prosrc => 'pg_check_relation' },\n> \n> Why is the pg_check_relation() is not a strict function? I think\n> prostrict can be 'true' for this function and we can drop checking if\n> the first argument is NULL.\n\n\nThat's because the fork is still optional. While this could be made mandatory\nwithout much problems, I think we'll eventually want to add a way to check only\na subset of a fork, so it seemed to me that it wasn't worth changing that now.\n\n\n> 5.\n> + memset(values, 0, sizeof(values));\n> + memset(nulls, 0, sizeof(nulls));\n> \n> I think we can do memset right before setting values to them, that is,\n> after checking (!found_in_sb && !force_lock).\n\n\nIndeed, done!\n\n\n> 6.\n> +static bool\n> +check_buffer(char *buffer, uint32 blkno, uint16 *chk_expected,\n> + uint16 *chk_found)\n> +{\n> + PageHeader hdr = (PageHeader) buffer;\n> +\n> + Assert(chk_expected && chk_found);\n> +\n> + if (PageIsNew(hdr))\n> + {\n> + /*\n> + * Check if the page is really new or if there's a corruption that\n> + * affected PageIsNew detection. 
Note that PageIsVerified won't try to\n> + * detect checksum corruption in this case, so there's no risk of\n> + * duplicated corruption report.\n> + */\n> + if (PageIsVerified(buffer, blkno))\n> \n> How about using Page instead of PageHeader? Looking at other codes,\n> ISTM we usually pass Page to both PageIsNew() and PageIsVerified().\n\n\nAgreed, done.\n\n\n> 7.\n> + <entry>\n> + <literal><function>pg_check_relation(<parameter>relation</parameter>\n> <type>oid</type>[, <parameter>fork</parameter>\n> <type>text</type>])</function></literal>.\n> + </entry>\n> \n> +{ oid => '9147', descr => 'check data integrity for one or all relations',\n> + proname => 'pg_check_relation', proisstrict => 'f', procost => '10000',\n> + prorows => '20', proretset => 't', proparallel => 'r',\n> + provolatile => 'v', prorettype => 'record', proargtypes => 'regclass text',\n> + proallargtypes => '{regclass,text,oid,int4,int8,int4,int4}',\n> + proargmodes => '{i,i,o,o,o,o,o}',\n> + proargnames =>\n> '{relation,fork,relid,forknum,failed_blocknum,expected_checksum,found_checksum}',\n> + prosrc => 'pg_check_relation' },\n> \n> The function argument data types don't match in the doc and function\n> declaretion. relation is 'oid' in the doc but is 'regclass' in the\n> function declaretion.\n\n\nFixed.\n\n\n> 8.\n> +#define SRF_COLS 5 /* Number of output arguments in the SRF */\n> \n> Looking at similar built-in functions that return set of records they\n> use a more specific name for the number of returned columns such as\n> PG_STAT_GET_WAL_SENDERS_COLS and PG_GET_SHMEM_SIZES_COLS. How about\n> PG_CHECK_RELATION_COLS?\n> \n> check_relation_fork() seems to quite depends on pg_check_relation()\n> because the returned tuplestore is specified by pg_check_relation().\n> It's just an idea but to improve reusability, how about moving\n> check_relation_fork() to checksumfunc.c? 
That is, in checksumfuncs.c\n> while iterating all blocks we call a new function in checksum.c, say\n> check_one_block() function, which has the following part and is\n> responsible for getting, checking the specified block and returning a\n> boolean indicating whether the block has corruption or not, along with\n> chk_found and chk_expected:\n> \n> /*\n> * To avoid too much overhead, the buffer will be first read without\n> * the locks that would guarantee the lack of false positive, as such\n> * events should be quite rare.\n> */\n> Retry:\n> if (!check_get_buffer(relation, forknum, blkno, buffer, force_lock,\n> &found_in_sb))\n> continue;\n> \n> if (check_buffer(buffer, blkno, &chk_expected, &chk_found))\n> continue;\n> \n> /*\n> * If we get a failure and the buffer wasn't found in shared buffers,\n> * reread the buffer with suitable lock to avoid false positive. See\n> * check_get_buffer for more details.\n> */\n> if (!found_in_sb && !force_lock)\n> {\n> force_lock = true;\n> goto Retry;\n> }\n> \n> A new function in checksumfuncs.c or pg_check_relation will be\n> responsible for storing the result to the tuplestore. That way,\n> check_one_block() will be useful for other use when we want to check\n> if the particular block has corruption with low overhead.\n\n\nYes, I agree that passing the tuplestore isn't an ideal approach and some\nrefactoring should probably happen. One thing is that this wouldn't be\n\"check_one_block()\" but \"check_one_block_on_disk()\" (which could also be from\nthe OS cache). I'm not sure how useful it's in itself. It also raises some\nconcerns about the throttling. I didn't change that for now, but I hope\nthere'll be some other feedback about it.",
"msg_date": "Fri, 3 Apr 2020 11:39:11 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Fri, Apr 03, 2020 at 11:39:11AM +0200, Julien Rouhaud wrote:\n> On Fri, Apr 03, 2020 at 12:24:50PM +0900, Masahiko Sawada wrote:\n> > \n> > check_relation_fork() seems to quite depends on pg_check_relation()\n> > because the returned tuplestore is specified by pg_check_relation().\n> > It's just an idea but to improve reusability, how about moving\n> > check_relation_fork() to checksumfunc.c? That is, in checksumfuncs.c\n> > while iterating all blocks we call a new function in checksum.c, say\n> > check_one_block() function, which has the following part and is\n> > responsible for getting, checking the specified block and returning a\n> > boolean indicating whether the block has corruption or not, along with\n> > chk_found and chk_expected:\n> > \n> > /*\n> > * To avoid too much overhead, the buffer will be first read without\n> > * the locks that would guarantee the lack of false positive, as such\n> > * events should be quite rare.\n> > */\n> > Retry:\n> > if (!check_get_buffer(relation, forknum, blkno, buffer, force_lock,\n> > &found_in_sb))\n> > continue;\n> > \n> > if (check_buffer(buffer, blkno, &chk_expected, &chk_found))\n> > continue;\n> > \n> > /*\n> > * If we get a failure and the buffer wasn't found in shared buffers,\n> > * reread the buffer with suitable lock to avoid false positive. See\n> > * check_get_buffer for more details.\n> > */\n> > if (!found_in_sb && !force_lock)\n> > {\n> > force_lock = true;\n> > goto Retry;\n> > }\n> > \n> > A new function in checksumfuncs.c or pg_check_relation will be\n> > responsible for storing the result to the tuplestore. That way,\n> > check_one_block() will be useful for other use when we want to check\n> > if the particular block has corruption with low overhead.\n> \n> \n> Yes, I agree that passing the tuplestore isn't an ideal approach and some\n> refactoring should probably happen. 
One thing is that this wouldn't be\n> \"check_one_block()\" but \"check_one_block_on_disk()\" (which could also be from\n> the OS cache). I'm not sure how useful it's in itself. It also raises some\n> concerns about the throttling. I didn't change that for now, but I hope\n> there'll be some other feedback about it.\n> \n\n\nI had some time this morning, so I did the suggested refactoring as it seems\nlike a way cleaner interface. I also kept the suggested check_one_block().",
"msg_date": "Sat, 4 Apr 2020 11:04:28 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Sat, 4 Apr 2020 at 18:04, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Fri, Apr 03, 2020 at 11:39:11AM +0200, Julien Rouhaud wrote:\n> > On Fri, Apr 03, 2020 at 12:24:50PM +0900, Masahiko Sawada wrote:\n> > >\n> > > check_relation_fork() seems to quite depends on pg_check_relation()\n> > > because the returned tuplestore is specified by pg_check_relation().\n> > > It's just an idea but to improve reusability, how about moving\n> > > check_relation_fork() to checksumfunc.c? That is, in checksumfuncs.c\n> > > while iterating all blocks we call a new function in checksum.c, say\n> > > check_one_block() function, which has the following part and is\n> > > responsible for getting, checking the specified block and returning a\n> > > boolean indicating whether the block has corruption or not, along with\n> > > chk_found and chk_expected:\n> > >\n> > > /*\n> > > * To avoid too much overhead, the buffer will be first read without\n> > > * the locks that would guarantee the lack of false positive, as such\n> > > * events should be quite rare.\n> > > */\n> > > Retry:\n> > > if (!check_get_buffer(relation, forknum, blkno, buffer, force_lock,\n> > > &found_in_sb))\n> > > continue;\n> > >\n> > > if (check_buffer(buffer, blkno, &chk_expected, &chk_found))\n> > > continue;\n> > >\n> > > /*\n> > > * If we get a failure and the buffer wasn't found in shared buffers,\n> > > * reread the buffer with suitable lock to avoid false positive. See\n> > > * check_get_buffer for more details.\n> > > */\n> > > if (!found_in_sb && !force_lock)\n> > > {\n> > > force_lock = true;\n> > > goto Retry;\n> > > }\n> > >\n> > > A new function in checksumfuncs.c or pg_check_relation will be\n> > > responsible for storing the result to the tuplestore. 
That way,\n> > > check_one_block() will be useful for other use when we want to check\n> > > if the particular block has corruption with low overhead.\n> >\n> >\n> > Yes, I agree that passing the tuplestore isn't an ideal approach and some\n> > refactoring should probably happen. One thing is that this wouldn't be\n> > \"check_one_block()\" but \"check_one_block_on_disk()\" (which could also be from\n> > the OS cache). I'm not sure how useful it's in itself. It also raises some\n> > concerns about the throttling. I didn't change that for now, but I hope\n> > there'll be some other feedback about it.\n> >\n>\n>\n> I had some time this morning, so I did the suggested refactoring as it seems\n> like a way cleaner interface. I also kept the suggested check_one_block().\n\nThank you for updating the patch! The patch looks good to me. Here are\nsome random comments mostly about cosmetic changes.\n\n1.\nI think we can have two separate SQL functions:\npg_check_relation(regclass, text) and pg_check_relation(regclass),\ninstead of setting NULL by default to the second argument.\n\n2.\n+ * Check data sanity for a specific block in the given fork of the given\n+ * relation, always retrieved locally with smgrred even if a version exists in\n\ns/smgrred/smgrread/\n\n3.\n+ /* The buffer will have to check checked. */\n+ Assert(checkit);\n\nShould it be \"The buffer will have to be checked\"?\n\n4.\n+ if (!is_member_of_role(GetUserId(), DEFAULT_ROLE_READ_SERVER_FILES))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n+ errmsg(\"only superuser or a member of the\npg_read_server_files role may use this function\")));\n\nLooking at the definition of pg_stat_read_server_files role, this role\nseems to be for operations that could read non-database files such as\ncsv files. Therefore, currently this role is used by file_fdw and COPY\ncommand. 
I personally think pg_stat_scan_tables would be more\nappropriate for this function but I'm not sure.\n\n5.\n+ /* Set cost-based vacuum delay */\n+ ChecksumCostActive = (ChecksumCostDelay > 0);\n+ ChecksumCostBalance = 0;\n\ns/vacuum/checksum verification/\n\n6.\n+ ereport(WARNING,\n+ (errcode(ERRCODE_DATA_CORRUPTED),\n+ errmsg(\"invalid page in block %u of relation %s\",\n+ blkno,\n+ relpath(relation->rd_smgr->smgr_rnode, forknum))));\n\nI think it's better to show the relation name instead of the relation path here.\n\n7.\n+ ereport(ERROR,\n+ (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n+ errmsg(\"relation \\\"%s\\\" does not have storage to be checked\",\n+ quote_qualified_identifier(\n+ get_namespace_name(get_rel_namespace(relid)),\n+ get_rel_name(relid)))));\n\nLooking at other similar error messages we don't show qualified\nrelation name but the relation name gotten by\nRelationGetRelationName(relation). Can we do that here as well for\nconsistency?\n\n8.\n+ if (!(rsinfo->allowedModes & SFRM_Materialize))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"materialize mode required, but it is not \" \\\n+ \"allowed in this context\")));\n\nI think it's better to have this error message in one line for easy grepping.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 5 Apr 2020 13:13:30 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Sun, Apr 05, 2020 at 01:13:30PM +0900, Masahiko Sawada wrote:\n> \n> Thank you for updating the patch! The patch looks good to me. Here are\n> some random comments mostly about cosmetic changes.\n> \n\nThanks a lot for the review!\n\n> \n> 1.\n> I think we can have two separate SQL functions:\n> pg_check_relation(regclass, text) and pg_check_relation(regclass),\n> instead of setting NULL by default to the second argument.\n> \n\nI'm fine with it, so implemented this way with the required documentation\nchanges.\n\n> \n> 2.\n> + * Check data sanity for a specific block in the given fork of the given\n> + * relation, always retrieved locally with smgrred even if a version exists in\n> \n> s/smgrred/smgrread/\n\nFixed.\n\n> \n> 3.\n> + /* The buffer will have to check checked. */\n> + Assert(checkit);\n> \n> Should it be \"The buffer will have to be checked\"?\n> \n\nOops indeed, fixed.\n\n> \n> 4.\n> + if (!is_member_of_role(GetUserId(), DEFAULT_ROLE_READ_SERVER_FILES))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> + errmsg(\"only superuser or a member of the\n> pg_read_server_files role may use this function\")));\n> \n> Looking at the definition of pg_stat_read_server_files role, this role\n> seems to be for operations that could read non-database files such as\n> csv files. Therefore, currently this role is used by file_fdw and COPY\n> command. 
I personally think pg_stat_scan_tables would be more\n> appropriate for this function but I'm not sure.\n> \n\nThat's a very good point, especially since the documentation of this default\nrole is quite relevant for those functions:\n\n\"Execute monitoring functions that may take ACCESS SHARE locks on tables,\npotentially for a long time.\"\n\nSo changed!\n\n> \n> 5.\n> + /* Set cost-based vacuum delay */\n> + ChecksumCostActive = (ChecksumCostDelay > 0);\n> + ChecksumCostBalance = 0;\n> \n> s/vacuum/checksum verification/\n> \n\nFixed.\n\n> \n> 6.\n> + ereport(WARNING,\n> + (errcode(ERRCODE_DATA_CORRUPTED),\n> + errmsg(\"invalid page in block %u of relation %s\",\n> + blkno,\n> + relpath(relation->rd_smgr->smgr_rnode, forknum))));\n> \n> I think it's better to show the relation name instead of the relation path here.\n> \n\nI'm here using the same pattern as what ReadBuffer_common() would display if a\ncorrupted block is read. I think it's better to keep the format for both, so\nany existing log analyzer will keep working with those new functions.\n\n> \n> 7.\n> + ereport(ERROR,\n> + (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n> + errmsg(\"relation \\\"%s\\\" does not have storage to be checked\",\n> + quote_qualified_identifier(\n> + get_namespace_name(get_rel_namespace(relid)),\n> + get_rel_name(relid)))));\n> \n> Looking at other similar error messages we don't show qualified\n> relation name but the relation name gotten by\n> RelationGetRelationName(relation). Can we do that here as well for\n> consistency?\n> \n\nIndeed, fixed.\n\n> \n> 8.\n> + if (!(rsinfo->allowedModes & SFRM_Materialize))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> + errmsg(\"materialize mode required, but it is not \" \\\n> + \"allowed in this context\")));\n> \n> I think it's better to have this error message in one line for easy grepping.\n\nFixed.\n\nI also fixed missing leading tab in the perl TAP tests",
"msg_date": "Sun, 5 Apr 2020 10:43:55 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Sun, 5 Apr 2020 at 17:44, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Sun, Apr 05, 2020 at 01:13:30PM +0900, Masahiko Sawada wrote:\n> >\n> > Thank you for updating the patch! The patch looks good to me. Here are\n> > some random comments mostly about cosmetic changes.\n> >\n>\n> Thanks a lot for the review!\n\nThank you for updating the patch.\n\n>\n> >\n> > 1.\n> > I think we can have two separate SQL functions:\n> > pg_check_relation(regclass, text) and pg_check_relation(regclass),\n> > instead of setting NULL by default to the second argument.\n> >\n>\n> I'm fine with it, so implemented this way with the required documentation\n> changes.\n\nWhy do we need two rows in the doc? For instance, replication slot\nfunctions have some optional arguments but there is only one row in\nthe doc. So I think we don't need to change the doc from the previous\nversion patch.\n\nAnd I think these are not necessary as we already defined in\ninclude/catalog/pg_proc.dat:\n\n+CREATE OR REPLACE FUNCTION pg_check_relation(\n+ IN relation regclass,\n+ OUT relid oid, OUT forknum integer, OUT failed_blocknum bigint,\n+ OUT expected_checksum integer, OUT found_checksum integer)\n+ RETURNS SETOF record STRICT VOLATILE LANGUAGE internal AS 'pg_check_relation'\n+ PARALLEL RESTRICTED;\n+\n+CREATE OR REPLACE FUNCTION pg_check_relation(\n+ IN relation regclass, IN fork text,\n+ OUT relid oid, OUT forknum integer, OUT failed_blocknum bigint,\n+ OUT expected_checksum integer, OUT found_checksum integer)\n+ RETURNS SETOF record STRICT VOLATILE LANGUAGE internal\n+ AS 'pg_check_relation_fork'\n+ PARALLEL RESTRICTED;\n\n>\n> >\n> > 2.\n> > + * Check data sanity for a specific block in the given fork of the given\n> > + * relation, always retrieved locally with smgrred even if a version exists in\n> >\n> > s/smgrred/smgrread/\n>\n> Fixed.\n>\n> >\n> > 3.\n> > + /* The buffer will have to check checked. 
*/\n> > + Assert(checkit);\n> >\n> > Should it be \"The buffer will have to be checked\"?\n> >\n>\n> Oops indeed, fixed.\n>\n> >\n> > 4.\n> > + if (!is_member_of_role(GetUserId(), DEFAULT_ROLE_READ_SERVER_FILES))\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> > + errmsg(\"only superuser or a member of the\n> > pg_read_server_files role may use this function\")));\n> >\n> > Looking at the definition of pg_stat_read_server_files role, this role\n> > seems to be for operations that could read non-database files such as\n> > csv files. Therefore, currently this role is used by file_fdw and COPY\n> > command. I personally think pg_stat_scan_tables would be more\n> > appropriate for this function but I'm not sure.\n> >\n>\n> That's a very good point, especially since the documentation of this default\n> role is quite relevant for those functions:\n>\n> \"Execute monitoring functions that may take ACCESS SHARE locks on tables,\n> potentially for a long time.\"\n>\n> So changed!\n>\n> >\n> > 5.\n> > + /* Set cost-based vacuum delay */\n> > + ChecksumCostActive = (ChecksumCostDelay > 0);\n> > + ChecksumCostBalance = 0;\n> >\n> > s/vacuum/checksum verification/\n> >\n>\n> Fixed.\n>\n> >\n> > 6.\n> > + ereport(WARNING,\n> > + (errcode(ERRCODE_DATA_CORRUPTED),\n> > + errmsg(\"invalid page in block %u of relation %s\",\n> > + blkno,\n> > + relpath(relation->rd_smgr->smgr_rnode, forknum))));\n> >\n> > I think it's better to show the relation name instead of the relation path here.\n> >\n>\n> I'm here using the same pattern as what ReadBuffer_common() would display if a\n> corrupted block is read. I think it's better to keep the format for both, so\n> any existing log analyzer will keep working with those new functions.\n\nOk, I agree with you.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 5 Apr 2020 18:08:06 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Sun, Apr 05, 2020 at 06:08:06PM +0900, Masahiko Sawada wrote:\n> \n> Why do we need two rows in the doc? For instance, replication slot\n> functions have some optional arguments but there is only one row in\n> the doc. So I think we don't need to change the doc from the previous\n> version patch.\n> \n\nI thought that if we document the function as pg_check_relation(regclass [,\nfork]) users could think that the 2nd argument is optional, so that\npg_check_relation('something', NULL) could be a valid alias for the 1-argument\nform, which it isn't. After checking, I see that e.g. current_setting has the\nsame semantics and is documented the way you suggest, so fixed back to previous\nversion.\n\n> And I think these are not necessary as we already defined in\n> include/catalog/pg_proc.dat:\n> \n> +CREATE OR REPLACE FUNCTION pg_check_relation(\n> + IN relation regclass,\n> + OUT relid oid, OUT forknum integer, OUT failed_blocknum bigint,\n> + OUT expected_checksum integer, OUT found_checksum integer)\n> + RETURNS SETOF record STRICT VOLATILE LANGUAGE internal AS 'pg_check_relation'\n> + PARALLEL RESTRICTED;\n> +\n> +CREATE OR REPLACE FUNCTION pg_check_relation(\n> + IN relation regclass, IN fork text,\n> + OUT relid oid, OUT forknum integer, OUT failed_blocknum bigint,\n> + OUT expected_checksum integer, OUT found_checksum integer)\n> + RETURNS SETOF record STRICT VOLATILE LANGUAGE internal\n> + AS 'pg_check_relation_fork'\n> + PARALLEL RESTRICTED;\n> \n\nOh right this isn't required since there's no default value anymore, fixed.\n\nv9 attached.",
"msg_date": "Sun, 5 Apr 2020 11:44:59 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Sun, 5 Apr 2020 at 18:45, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Sun, Apr 05, 2020 at 06:08:06PM +0900, Masahiko Sawada wrote:\n> >\n> > Why do we need two rows in the doc? For instance, replication slot\n> > functions have some optional arguments but there is only one row in\n> > the doc. So I think we don't need to change the doc from the previous\n> > version patch.\n> >\n>\n> I thought that if we document the function as pg_check_relation(regclass [,\n> fork]) users could think that the 2nd argument is optional, so that\n> pg_check_relation('something', NULL) could be a valid alias for the 1-argument\n> form, which it isn't. After checking, I see that e.g. current_setting has the\n> same semantics and is documented the way you suggest, so fixed back to previous\n> version.\n>\n> > And I think these are not necessary as we already defined in\n> > include/catalog/pg_proc.dat:\n> >\n> > +CREATE OR REPLACE FUNCTION pg_check_relation(\n> > + IN relation regclass,\n> > + OUT relid oid, OUT forknum integer, OUT failed_blocknum bigint,\n> > + OUT expected_checksum integer, OUT found_checksum integer)\n> > + RETURNS SETOF record STRICT VOLATILE LANGUAGE internal AS 'pg_check_relation'\n> > + PARALLEL RESTRICTED;\n> > +\n> > +CREATE OR REPLACE FUNCTION pg_check_relation(\n> > + IN relation regclass, IN fork text,\n> > + OUT relid oid, OUT forknum integer, OUT failed_blocknum bigint,\n> > + OUT expected_checksum integer, OUT found_checksum integer)\n> > + RETURNS SETOF record STRICT VOLATILE LANGUAGE internal\n> > + AS 'pg_check_relation_fork'\n> > + PARALLEL RESTRICTED;\n> >\n>\n> Oh right this isn't required since there's no default value anymore, fixed.\n>\n> v9 attached.\n\nThank you for updating the patch! The patch looks good to me.\n\nI've marked this patch as Ready for Committer. 
I hope this patch will\nget committed to PG13.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 5 Apr 2020 20:01:36 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Sun, Apr 05, 2020 at 08:01:36PM +0900, Masahiko Sawada wrote:\n> On Sun, 5 Apr 2020 at 18:45, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Sun, Apr 05, 2020 at 06:08:06PM +0900, Masahiko Sawada wrote:\n> > >\n> > > Why do we need two rows in the doc? For instance, replication slot\n> > > functions have some optional arguments but there is only one row in\n> > > the doc. So I think we don't need to change the doc from the previous\n> > > version patch.\n> > >\n> >\n> > I thought that if we document the function as pg_check_relation(regclass [,\n> > fork]) users could think that the 2nd argument is optional, so that\n> > pg_check_relation('something', NULL) could be a valid alias for the 1-argument\n> > form, which it isn't. After checking, I see that e.g. current_setting has the\n> > same semantics and is documented the way you suggest, so fixed back to previous\n> > version.\n> >\n> > > And I think these are not necessary as we already defined in\n> > > include/catalog/pg_proc.dat:\n> > >\n> > > +CREATE OR REPLACE FUNCTION pg_check_relation(\n> > > + IN relation regclass,\n> > > + OUT relid oid, OUT forknum integer, OUT failed_blocknum bigint,\n> > > + OUT expected_checksum integer, OUT found_checksum integer)\n> > > + RETURNS SETOF record STRICT VOLATILE LANGUAGE internal AS 'pg_check_relation'\n> > > + PARALLEL RESTRICTED;\n> > > +\n> > > +CREATE OR REPLACE FUNCTION pg_check_relation(\n> > > + IN relation regclass, IN fork text,\n> > > + OUT relid oid, OUT forknum integer, OUT failed_blocknum bigint,\n> > > + OUT expected_checksum integer, OUT found_checksum integer)\n> > > + RETURNS SETOF record STRICT VOLATILE LANGUAGE internal\n> > > + AS 'pg_check_relation_fork'\n> > > + PARALLEL RESTRICTED;\n> > >\n> >\n> > Oh right this isn't required since there's no default value anymore, fixed.\n> >\n> > v9 attached.\n> \n> Thank you for updating the patch! The patch looks good to me.\n> \n> I've marked this patch as Ready for Committer. 
I hope this patch will\n> get committed to PG13.\n> \nThanks a lot!\n\n\n",
"msg_date": "Sun, 5 Apr 2020 13:17:22 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "> On 5 Apr 2020, at 13:17, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> On Sun, Apr 05, 2020 at 08:01:36PM +0900, Masahiko Sawada wrote:\n\n>> Thank you for updating the patch! The patch looks good to me.\n>> \n>> I've marked this patch as Ready for Committer. I hope this patch will\n>> get committed to PG13.\n\n> Thanks a lot!\n\nThis patch has been through quite thorough review, and skimming the thread all\nconcerns raised have been addressed. It still applies and tests green in the CF\nPatchtester. The feature in itself certainly gets my +1 for inclusion, it\nseems a good addition.\n\nIs any committer who has taken part in the thread (or anyone else for that\nmatter) interested in seeing this to some form of closure in this CF?\n\ncheers ./daniel\n\n",
"msg_date": "Sun, 5 Jul 2020 23:00:30 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "Small language fixes in comments and user-facing docs.\n\n\ndiff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\nindex 88efb38556..39596db193 100644\n--- a/doc/src/sgml/func.sgml\n+++ b/doc/src/sgml/func.sgml\n@@ -26162,7 +26162,7 @@ SELECT convert_from(pg_read_binary_file('file_in_utf8.txt'), 'UTF8');\n \n <para>\n The functions shown in <xref linkend=\"functions-data-sanity-table\"/>\n- provide means to check for health of data file in a cluster.\n+ provide a means to check for health of a data file in a cluster.\n </para>\n \n <table id=\"functions-data-sanity-table\">\n@@ -26179,8 +26179,8 @@ SELECT convert_from(pg_read_binary_file('file_in_utf8.txt'), 'UTF8');\n <literal><function>pg_check_relation(<parameter>relation</parameter> <type>regclass</type> [, <parameter>fork</parameter> <type>text</type>])</function></literal>\n </entry>\n <entry><type>setof record</type></entry>\n- <entry>Validate the checksums for all blocks of all or the given fork of\n- a given relation.</entry>\n+ <entry>Validate the checksum for all blocks of a relation.\n+ </entry>\n </row>\n </tbody>\n </tgroup>\n@@ -26190,15 +26190,15 @@ SELECT convert_from(pg_read_binary_file('file_in_utf8.txt'), 'UTF8');\n <primary>pg_check_relation</primary>\n </indexterm>\n <para id=\"functions-check-relation-note\" xreflabel=\"pg_check_relation\">\n- <function>pg_check_relation</function> iterates over all the blocks of a\n- given relation and verify their checksum. If provided,\n- <replaceable>fork</replaceable> should be <literal>'main'</literal> for the\n+ <function>pg_check_relation</function> iterates over all blocks of a\n+ given relation and verifies their checksums. If passed,\n+ <replaceable>fork</replaceable> specifies that only checksums of the given\n+ fork are to be verified. 
Fork should be <literal>'main'</literal> for the\n main data fork, <literal>'fsm'</literal> for the free space map,\n <literal>'vm'</literal> for the visibility map, or\n- <literal>'init'</literal> for the initialization fork, and only this\n- specific fork will be verifies, otherwise all forks will. The function\n- returns the list of blocks for which the found checksum doesn't match the\n- expected one. See <xref\n+ <literal>'init'</literal> for the initialization fork.\n+ The function returns a list of blocks for which the computed and stored\n+ checksums don't match. See <xref\n linkend=\"runtime-config-resource-checksum-verification-cost\"/> for\n information on how to configure cost-based verification delay. You must be\n a member of the <literal>pg_read_all_stats</literal> role to use this\ndiff --git a/src/backend/storage/page/checksum.c b/src/backend/storage/page/checksum.c\nindex eb2c919c34..17cd95ec95 100644\n--- a/src/backend/storage/page/checksum.c\n+++ b/src/backend/storage/page/checksum.c\n@@ -36,7 +36,7 @@\n * actual storage, you have to discard the operating system cache before\n * running those functions.\n *\n- * To avoid torn page and possible false positive when reading data, and\n+ * To avoid torn pages and possible false positives when reading data, and to\n * keeping overhead as low as possible, the following heuristics are used:\n *\n * - a shared LWLock is taken on the target buffer pool partition mapping, and\n@@ -92,8 +92,8 @@ check_one_block(Relation relation, ForkNumber forknum, BlockNumber blkno,\n \t*chk_expected = *chk_found = NoComputedChecksum;\n \n \t/*\n-\t * To avoid too much overhead, the buffer will be first read without\n-\t * the locks that would guarantee the lack of false positive, as such\n+\t * To avoid excessive overhead, the buffer will be first read without\n+\t * the locks that would prevent false positives, as such\n \t * events should be quite rare.\n \t */\n Retry:\n@@ -120,10 +120,10 @@ Retry:\n }\n \n /*\n- * 
Perform a checksum check on the passed page. Returns whether the page is\n+ * Perform a checksum check on the passed page. Return True iff the page is\n * valid or not, and assign the expected and found checksum in chk_expected and\n * chk_found, respectively. Note that a page can look like new but could be\n- * the result of a corruption. We still check for this case, but we can't\n+ * the result of corruption. We still check for this case, but we can't\n * compute its checksum as pg_checksum_page() is explicitly checking for\n * non-new pages, so NoComputedChecksum will be set in chk_found.\n */\n@@ -139,7 +139,7 @@ check_buffer(char *buffer, uint32 blkno, uint16 *chk_expected,\n \tif (PageIsNew(page))\n \t{\n \t\t/*\n-\t\t * Check if the page is really new or if there's a corruption that\n+\t\t * Check if the page is really new or if there's corruption that\n \t\t * affected PageIsNew detection. Note that PageIsVerified won't try to\n \t\t * detect checksum corruption in this case, so there's no risk of\n \t\t * duplicated corruption report.\n@@ -151,7 +151,7 @@ check_buffer(char *buffer, uint32 blkno, uint16 *chk_expected,\n \t\t}\n \n \t\t/*\n-\t\t * There's a corruption, but since this affect PageIsNew, we\n+\t\t * There's corruption, but since this affects PageIsNew, we\n \t\t * can't compute a checksum, so set NoComputedChecksum for the\n \t\t * expected checksum.\n \t\t */\n@@ -218,8 +218,8 @@ check_delay_point(void)\n * held. Reading with this lock is to avoid the unlikely but possible case\n * that a buffer wasn't present in shared buffers when we checked but it then\n * alloc'ed in shared_buffers, modified and flushed concurrently when we\n- * later try to read it, leading to false positive due to torn page. Caller\n- * can read first the buffer without holding the target buffer mapping\n+ * later try to read it, leading to false positives due to a torn page. 
Caller\n+ * can first read the buffer without holding the target buffer mapping\n * partition LWLock to have an optimistic approach, and reread the buffer\n * from disk in case of error.\n *\n@@ -280,7 +280,7 @@ check_get_buffer(Relation relation, ForkNumber forknum,\n \t\t\tcheckit = false;\n \n \t\t/*\n-\t\t * Read the buffer from disk, taking on IO lock to prevent torn-page\n+\t\t * Read the buffer from disk, taking an IO lock to prevent torn-page\n \t\t * reads, in the unlikely event that it was concurrently dirtied and\n \t\t * flushed.\n \t\t */\n@@ -320,7 +320,7 @@ check_get_buffer(Relation relation, ForkNumber forknum,\n \t/*\n \t * Didn't find it in the buffer pool and didn't read it while holding the\n \t * buffer mapping partition lock. We'll have to try to read it from\n-\t * disk, after releasing the target partition lock to avoid too much\n+\t * disk, after releasing the target partition lock to avoid excessive\n \t * overhead. It means that it's possible to get a torn page later, so\n \t * we'll have to retry with a suitable lock in case of error to avoid\n \t * false positive.\ndiff --git a/src/backend/utils/adt/checksumfuncs.c b/src/backend/utils/adt/checksumfuncs.c\nindex d005b8d01f..fa5823677a 100644\n--- a/src/backend/utils/adt/checksumfuncs.c\n+++ b/src/backend/utils/adt/checksumfuncs.c\n@@ -1,7 +1,7 @@\n /*-------------------------------------------------------------------------\n *\n * checksumfuncs.c\n- *\t Functions for checksums related feature such as online verification\n+ *\t Functions for checksum related feature such as online verification\n *\n * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\n * Portions Copyright (c) 1994, Regents of the University of California\n@@ -181,7 +181,7 @@ check_relation_fork(TupleDesc tupdesc, Tuplestorestate *tupstore,\n \t\tif (check_one_block(relation, forknum, blkno, &chk_expected,\n \t\t\t\t\t\t\t&chk_found))\n \t\t{\n-\t\t\t/* Buffer not corrupted or no worth checking, 
continue */\n+\t\t\t/* Buffer not corrupted or not worth checking, continue */\n \t\t\tcontinue;\n \t\t}\n \n@@ -192,7 +192,7 @@ check_relation_fork(TupleDesc tupdesc, Tuplestorestate *tupstore,\n \t\tvalues[i++] = Int32GetDatum(forknum);\n \t\tvalues[i++] = UInt32GetDatum(blkno);\n \t\t/*\n-\t\t * This can happen if a corruption makes the block appears as\n+\t\t * This can happen if corruption makes the block appears as\n \t\t * PageIsNew() but isn't a new page.\n \t\t */\n \t\tif (chk_expected == NoComputedChecksum)\ndiff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c\nindex 5a51dccca9..57401580c3 100644\n--- a/src/backend/utils/misc/guc.c\n+++ b/src/backend/utils/misc/guc.c\n@@ -2383,7 +2383,7 @@ static struct config_int ConfigureNamesInt[] =\n \n \t{\n \t\t{\"checksum_cost_page\", PGC_USERSET, RESOURCES_CHECKSUM_DELAY,\n-\t\t\tgettext_noop(\"Checksum cost for verifying a page found.\"),\n+\t\t\tgettext_noop(\"Checksum cost for verifying a page.\"),\n \t\t\tNULL\n \t\t},\n \t\t&ChecksumCostPage,\ndiff --git a/src/test/check_relation/t/01_checksums_check.pl b/src/test/check_relation/t/01_checksums_check.pl\nindex 1ad34adcb9..2a3f2880ea 100644\n--- a/src/test/check_relation/t/01_checksums_check.pl\n+++ b/src/test/check_relation/t/01_checksums_check.pl\n@@ -218,7 +218,7 @@ $ENV{PGOPTIONS} = '--client-min-messages=WARNING';\n my ($cmdret, $stdout, $stderr) = $node->psql('postgres', \"SELECT\"\n \t. \" current_setting('data_checksums')\");\n \n-is($stdout, 'on', 'Data checksums shoud be enabled');\n+is($stdout, 'on', 'Data checksums should be enabled');\n \n ($cmdret, $stdout, $stderr) = $node->psql('postgres', \"SELECT\"\n \t. \" current_setting('block_size')\");\n@@ -254,7 +254,7 @@ is(check_checksums_call($node, 'u_t1_id_idx'), '1', 'Can check an unlogged index\n \t. 
\" current_setting('data_directory') || '/' || pg_relation_filepath('t1')\"\n );\n \n-isnt($stdout, '', 'A relfinode should be returned');\n+isnt($stdout, '', 'A relfilenode should be returned');\n \n my $filename = $stdout;\n \n-- \n2.17.0",
"msg_date": "Sun, 12 Jul 2020 12:34:03 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Sun, Jul 12, 2020 at 12:34:03PM -0500, Justin Pryzby wrote:\n> Small language fixes in comments and user-facing docs.\n\nThanks a lot! I just fixed a small issue (see below), PFA updated v10.\n\n> \n> diff --git a/src/backend/storage/page/checksum.c b/src/backend/storage/page/checksum.c\n> index eb2c919c34..17cd95ec95 100644\n> --- a/src/backend/storage/page/checksum.c\n> +++ b/src/backend/storage/page/checksum.c\n> @@ -36,7 +36,7 @@\n> * actual storage, you have to discard the operating system cache before\n> * running those functions.\n> *\n> - * To avoid torn page and possible false positive when reading data, and\n> + * To avoid torn pages and possible false positives when reading data, and to\n> * keeping overhead as low as possible, the following heuristics are used:\n> *\n\nChanged for \"to keep\".",
"msg_date": "Tue, 14 Jul 2020 11:08:08 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Tue, Jul 14, 2020 at 11:08:08AM +0200, Julien Rouhaud wrote:\n> On Sun, Jul 12, 2020 at 12:34:03PM -0500, Justin Pryzby wrote:\n>> Small language fixes in comments and user-facing docs.\n> \n> Thanks a lot! I just fixed a small issue (see below), PFA updated v10.\n\nSawada-san, you are registered as a reviewer of this patch. Are you\nplanning to look at it? If you are busy lately, that's fine as well\n(congrats!). In this case it could be better to unregister from the\nCF app for this entry.\n\nI am refreshing my mind here, but here are some high-level comments\nfor now...\n\n+#include \"postgres.h\"\n+\n+#include \"access/tupdesc.h\"\n+#include \"common/relpath.h\"\n #include \"storage/block.h\"\n+#include \"utils/relcache.h\"\n+#include \"utils/tuplestore.h\"\n[...]\n+extern bool check_one_block(Relation relation, ForkNumber forknum,\n+ BlockNumber blkno, uint16 *chk_expected,\n+ uint16 *chk_found);\nI don't think that it is a good idea to add this much to checksum.h\nas these are includes coming mainly from the backend. Note that\npg_checksum_page() is a function designed to be also available for\nfrontend tools, with checksum.h something that can be included in\nfrontends. This would mean that we could move all the buffer lookup\nAPIs directly to checksumfuncs.c, or move that into a separate file\ncloser to the location.\n\n+ * A zero checksum can never be computed, see pg_checksum_page() */\n+#define NoComputedChecksum 0\nWouldn't it be better to rename that something like\nInvalidPageChecksum, and make use of it in pg_checksum_page()? It\nwould be more consistent with the naming of transaction IDs, OIDs or\neven XLogRecPtr. And that could be a separate patch.\n\n+++ b/src/test/check_relation/t/01_checksums_check.pl\n@@ -0,0 +1,276 @@\n+use strict;\n+use warnings;\nIt could be better to move that to src/test/modules/, so as it could\nbe picked more easily by MSVC scripts in the future. 
Note that if we\napply the normal naming convention here this should be named\n001_checksum_check.pl.\n\n+subdir = src/test/check_relation\n+top_builddir = ../../..\n+include $(top_builddir)/src/Makefile.global\nLet's use a Makefile shaped in a way similar to modules/test_misc that\nmakes use of TAP_TESTS = 1. There is the infra, let's rely on it for\nthe regression tests.\n\n+ pg_usleep(msec * 1000L);\nCould it be possible to add a wait event here? It would be nice to be\nable to monitor that in pg_stat_activity.\n\n+if (exists $ENV{MY_PG_REGRESS})\n+{\n+ $ENV{PG_REGRESS} = $ENV{MY_PG_REGRESS};\n+}\nWhat is MY_PG_REGRESS for? A remnant from an external makefile\nperhaps?\n\n+ /*\n+ * If we get a failure and the buffer wasn't found in shared buffers,\n+ * reread the buffer with suitable lock to avoid false positive. See\n+ * check_get_buffer for more details.\n+ */\n+ if (!found_in_sb && !force_lock)\n+ {\n+ force_lock = true;\n+ goto Retry;\n+ }\nAs designed, we have a risk of false positives with a torn page in the\nfirst loop when trying to look for a given buffer as we would try to\nuse smgrread() without a partition lock. This stresses me a bit, and\nfalse positives could scare users easily. Could we consider first a\nsafer approach where we don't do that, and just read the page while\nholding the partition lock? OK, that would be more expensive, but at\nleast that's safe in any case. My memory of this patch is a bit\nfuzzy, but this is itching me and this is the heart of the problem\ndealt with here :)\n--\nMichael",
"msg_date": "Mon, 7 Sep 2020 15:58:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Mon, Sep 7, 2020 at 8:59 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> +#include \"postgres.h\"\n> +\n> +#include \"access/tupdesc.h\"\n> +#include \"common/relpath.h\"\n> #include \"storage/block.h\"\n> +#include \"utils/relcache.h\"\n> +#include \"utils/tuplestore.h\"\n> [...]\n> +extern bool check_one_block(Relation relation, ForkNumber forknum,\n> + BlockNumber blkno, uint16 *chk_expected,\n> + uint16 *chk_found);\n> I don't think that it is a good idea to add this much to checksum.h\n> as these are includes coming mainly from the backend. Note that\n> pg_checksum_page() is a function designed to be also available for\n> frontend tools, with checksum.h something that can be included in\n> frontends. This would mean that we could move all the buffer lookup\n> APIs directly to checksumfuncs.c, or move that into a separate file\n> closer to the location.\n\nDid you mean creating a new checksumfuncs.c file? I don't find any\nsuch file in the current tree.\n\n> + * A zero checksum can never be computed, see pg_checksum_page() */\n> +#define NoComputedChecksum 0\n> Wouldn't it be better to rename that something like\n> InvalidPageChecksum, and make use of it in pg_checksum_page()? It\n> would be more consistent with the naming of transaction IDs, OIDs or\n> even XLogRecPtr. And that could be a separate patch.\n\nIt seems quite ambiguous, as checksum validity usually has a different\nmeaning. And in the code added here, the meaning isn't that the\nchecksum is invalid but that there's no checksum as it cannot be\ncomputed due to PageIsNew().\n\n> +++ b/src/test/check_relation/t/01_checksums_check.pl\n> @@ -0,0 +1,276 @@\n> +use strict;\n> +use warnings;\n> It could be better to move that to src/test/modules/, so as it could\n> be picked more easily by MSVC scripts in the future. 
Note that if we\n> apply the normal naming convention here this should be named\n> 001_checksum_check.pl.\n>\n> +subdir = src/test/check_relation\n> +top_builddir = ../../..\n> +include $(top_builddir)/src/Makefile.global\n> Let's use a Makefile shaped in a way similar to modules/test_misc that\n> makes use of TAP_TESTS = 1. There is the infra, let's rely on it for\n> the regression tests.\n\nWill fix.\n\n> + pg_usleep(msec * 1000L);\n> Could it be possible to add a wait event here? It would be nice to be\n> able to monitor that in pg_stat_activity.\n\nSure, I missed that as this was first implemented as an extension.\n\n> +if (exists $ENV{MY_PG_REGRESS})\n> +{\n> + $ENV{PG_REGRESS} = $ENV{MY_PG_REGRESS};\n> +}\n> What is MY_PG_REGRESS for? A remnant from an external makefile\n> perhaps?\n\nIndeed.\n\n> + /*\n> + * If we get a failure and the buffer wasn't found in shared buffers,\n> + * reread the buffer with suitable lock to avoid false positive. See\n> + * check_get_buffer for more details.\n> + */\n> + if (!found_in_sb && !force_lock)\n> + {\n> + force_lock = true;\n> + goto Retry;\n> + }\n> As designed, we have a risk of false positives with a torn page in the\n> first loop when trying to look for a given buffer as we would try to\n> use smgrread() without a partition lock. This stresses me a bit, and\n> false positives could scare users easily. Could we consider first a\n> safer approach where we don't do that, and just read the page while\n> holding the partition lock? OK, that would be more expensive, but at\n> least that's safe in any case. My memory of this patch is a bit\n> fuzzy, but this is itching me and this is the heart of the problem\n> dealt with here :)\n\nI'm not sure I understand. Unless I missed something this approach\n*cannot* raise a false positive. What it does is force a 2nd check\nwith stronger lock *to make sure it's actually a corruption*, so we\ndon't raise false positive. 
The only report that can happen in this\n1st loop is if smgread raises an error, which AFAICT can only happen\n(at least with mdread) if the whole block couldn't be read, which is a\nsign of a very bad problem. This should clearly be reported, as this\ncannot be caused by the locking heuristics used here.\n\n\n",
"msg_date": "Mon, 7 Sep 2020 09:38:30 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Mon, Sep 07, 2020 at 09:38:30AM +0200, Julien Rouhaud wrote:\n> Did you mean creating a new checksumfuncs.c file? I don't find any\n> such file in the current tree.\n\nYour patch adds checksumfuncs.c, so the subroutines grabbing a given\nblock could just be moved there.\n\n> I'm not sure I understand. Unless I missed something this approach\n> *cannot* raise a false positive. What it does is force a 2nd check\n> with stronger lock *to make sure it's actually a corruption*, so we\n> don't raise false positive. The only report that can happen in this\n> 1st loop is if smgread raises an error, which AFAICT can only happen\n> (at least with mdread) if the whole block couldn't be read, which is a\n> sign of a very bad problem. This should clearly be reported, as this\n> cannot be caused by the locking heuristics used here.\n\nWe don't know how much this optimization matters though? Could it be\npossible to get an idea of that? For example, take the case of one\nrelation with a fixed size in a read-only workload and a read-write\nworkload (as long as autovacuum and updates make the number of\nrelation blocks rather constant for the read-write case), doing a\nchecksum verification in parallel of multiple clients working on the\nrelation concurrently. Assuming that the relation is fully in the OS\ncache, we could get an idea of the impact with multiple\n(shared_buffers / relation size) rates to make the eviction more\naggressive? The buffer partition locks, knowing that\nNUM_BUFFER_PARTITIONS caps that, should be the bottleneck, still it\nseems to me that it would be good to see if we have a difference.\nWhat do you think?\n--\nMichael",
"msg_date": "Mon, 7 Sep 2020 17:50:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Mon, 7 Sep 2020 at 15:59, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jul 14, 2020 at 11:08:08AM +0200, Julien Rouhaud wrote:\n> > On Sun, Jul 12, 2020 at 12:34:03PM -0500, Justin Pryzby wrote:\n> >> Small language fixes in comments and user-facing docs.\n> >\n> > Thanks a lot! I just fixed a small issue (see below), PFA updated v10.\n>\n> Sawada-san, you are registered as a reviewer of this patch. Are you\n> planning to look at it? If you are busy lately, that's fine as well\n> (congrats!).\n\nThanks!\n\n> In this case it could be better to unregister from the\n> CF app for this entry.\n\nWell, I sent review comments on this patch and Julien fixed all\ncomments. So I’d marked this as Ready for Committer since I didn't\nhave further comments at that time, and I was waiting for the\ncommitter review. I'll look at this patch again but should I remove my\nname from the reviewer after that if no comments?\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 8 Sep 2020 11:36:45 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Tue, Sep 08, 2020 at 11:36:45AM +0900, Masahiko Sawada wrote:\n> On Mon, 7 Sep 2020 at 15:59, Michael Paquier <michael@paquier.xyz> wrote:\n>> In this case it could be better to unregister from the\n>> CF app for this entry.\n> \n> Well, I sent review comments on this patch and Julien fixed all\n> comments. So I’d marked this as Ready for Committer since I didn't\n> have further comments at that time, and I was waiting for the\n> committer review. I'll look at this patch again but should I remove my\n> name from the reviewer after that if no comments?\n\nAh, sorry, I somewhat missed the previous status of the patch.\nPerhaps that's an overdose of CF. Keeping your name as reviewer is\nfine I guess. I have begun looking at the patch and spotted some\nissues, so let's see where we go from here.\n--\nMichael",
"msg_date": "Tue, 8 Sep 2020 12:10:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Mon, Sep 07, 2020 at 05:50:38PM +0900, Michael Paquier wrote:\n> On Mon, Sep 07, 2020 at 09:38:30AM +0200, Julien Rouhaud wrote:\n> > Did you mean creating a new checksumfuncs.c file? I don't find any\n> > such file in the current tree.\n> \n> Your patch adds checksumfuncs.c, so the subroutines grabbing a given\n> block could just be moved there.\n> \n\nSorry, I was in the middle of a rebase for another patch and missed the new\nfiles added in this one. I added a new checksumfuncs.h for the required\ninclude that should not be seen by client code. I kept checksumfuncs.c and\nchecksums.c so that the SQL-visible declarations are separated from the rest of\nthe implementation as this is what we already do elsewhere I think. If that's\na problem I'll change and put everything in checksumfuncs.[ch].\n\nI also moved the tap tests in src/test/modules and renamed the file with a\n3-digit prefix. For the record I initially copied src/test/modules/brin, and this is\napparently the only subdir that has a 2-digit pattern.\n\nI also added a new WAIT_EVENT_CHECK_DELAY wait event.\n\n> > I'm not sure I understand. Unless I missed something this approach\n> > *cannot* raise a false positive. What it does is force a 2nd check\n> > with stronger lock *to make sure it's actually a corruption*, so we\n> > don't raise false positive. The only report that can happen in this\n> > 1st loop is if smgrread raises an error, which AFAICT can only happen\n> > (at least with mdread) if the whole block couldn't be read, which is a\n> > sign of a very bad problem. This should clearly be reported, as this\n> > cannot be caused by the locking heuristics used here.\n> \n> We don't know how much this optimization matters though? Could it be\n> possible to get an idea of that? 
For example, take the case of one\n> relation with a fixed size in a read-only workload and a read-write\n> workload (as long as autovacuum and updates make the number of\n> relation blocks rather constant for the read-write case), doing a\n> checksum verification in parallel of multiple clients working on the\n> relation concurrently. Assuming that the relation is fully in the OS\n> cache, we could get an idea of the impact with multiple\n> (shared_buffers / relation size) rates to make the eviction more\n> aggressive? The buffer partition locks, knowing that\n> NUM_BUFFER_PARTITIONS caps that, should be the bottleneck, still it\n> seems to me that it would be good to see if we have a difference.\n> What do you think?\n\nI assumed that the odds of having to check the buffer twice were so low, and\navoiding keeping a bufmapping lock while doing some IO was an uncontroversial\nenough optimisation, but maybe that's only wishful thinking.\n\nI'll do some benchmarking and see if I can get some figures, but it'll probably\ntake some time. In the meantime I'm attaching v11 of the patch that should\naddress all other comments.",
"msg_date": "Wed, 9 Sep 2020 11:25:24 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Wed, Sep 09, 2020 at 11:25:24AM +0200, Julien Rouhaud wrote:\n> I assumed that the odds of having to check the buffer twice were so low, and\n> avoiding keeping a bufmapping lock while doing some IO was an uncontroversial\n> enough optimisation, but maybe that's only wishful thinking.\n\nPerhaps it is worth it, so it would be good to make sure of it and see\nif that's actually worth the extra complication. This also depends on whether\nthe page is in the OS cache when it is not in shared buffers,\nmeaning that smgrread() is needed to fetch the page to check. I would\nbe more curious to see if there is an actual difference if the page is in\nthe OS cache.\n\n> I'll do some benchmarking and see if I can get some figures, but it'll probably\n> take some time. In the meantime I'm attaching v11 of the patch that should\n> address all other comments.\n\nThanks.\n\nAnother thing that was itching me is the introduction of 3 GUCs with\none new category for the sake of two SQL functions. For VACUUM we\nhave many things relying on the GUC delays, with autovacuum and manual\nvacuum. Perhaps it would make sense to have these if we have some day\na background worker doing checksum verifications, still that could\nperfectly be in contrib/, and that would be kind of hard to tune as\nwell. The patch enabling checksums on-the-fly could also be a good\nenough reason. Another thing we could consider is to pass down those\nparameters as function arguments, at the cost of not being able to\nreload them.\n--\nMichael",
"msg_date": "Wed, 9 Sep 2020 21:37:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Wed, Sep 9, 2020 at 2:37 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Another thing that was itching me is the introduction of 3 GUCs with\n> one new category for the sake of two SQL functions. For VACUUM we\n> have many things relying on the GUC delays, with autovacuum and manual\n> vacuum. Perhaps it would make sense to have these if we have some day\n> a background worker doing checksum verifications, still that could\n> perfectly be in contrib/, and that would be kind of hard to tune as\n> well. The patch enabling checksums on-the-fly could also be a reason\n> good enough. Another thing we could consider is to pass down those\n> parameters as function arguments, at the cost of not being able to\n> reload them.\n\nI'm not terribly happy with adding that for now, but it's quite clear\nthat there'll eventually be a lot of new stuff added that will benefit\nfrom either the category or the GUC. FTR once we reach an agreement\non how to do this check (I'm wondering if it'll stay an SQL function\nor become a plain backend command, in which case GUCs would be\nmandatory), I'll also be happy to work on a background worker to help\npeople running the check regularly. So in my opinion it's better to\nadd them now so we won't have to change the sql function definition\nlater when other facilities will rely on them.\n\n\n",
"msg_date": "Wed, 9 Sep 2020 14:52:17 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Wed, Sep 09, 2020 at 11:25:29AM +0200, Julien Rouhaud wrote:\n> \n> I'll do some becnhmarking and see if I can get some figures, but it'll probably\n> take some time. In the meantime I'm attaching v11 of the patch that should\n> address all other comments.\n\nI just realized that I forgot to update one of the Makefile when moving the TAP\ntest folder. v12 attached should fix that.",
"msg_date": "Wed, 9 Sep 2020 15:41:30 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Wed, Sep 09, 2020 at 03:41:30PM +0200, Julien Rouhaud wrote:\n> On Wed, Sep 09, 2020 at 11:25:29AM +0200, Julien Rouhaud wrote:\n> > \n> > I'll do some becnhmarking and see if I can get some figures, but it'll probably\n> > take some time. In the meantime I'm attaching v11 of the patch that should\n> > address all other comments.\n> \n> I just realized that I forgot to update one of the Makefile when moving the TAP\n> test folder. v12 attached should fix that.\n\n\nAnd the cfbot just reported a new error for Windows build. Attached v13 should\nfix that.",
"msg_date": "Thu, 10 Sep 2020 09:47:23 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Thu, Sep 10, 2020 at 09:47:23AM +0200, Julien Rouhaud wrote:\n> On Wed, Sep 09, 2020 at 03:41:30PM +0200, Julien Rouhaud wrote:\n> > On Wed, Sep 09, 2020 at 11:25:29AM +0200, Julien Rouhaud wrote:\n> > > \n> > > I'll do some becnhmarking and see if I can get some figures, but it'll probably\n> > > take some time. In the meantime I'm attaching v11 of the patch that should\n> > > address all other comments.\n> > \n> > I just realized that I forgot to update one of the Makefile when moving the TAP\n> > test folder. v12 attached should fix that.\n> \n> \n> And the cfbot just reported a new error for Windows build. Attached v13 should\n> fix that.\n\n\nI did some benchmarking using the following environnment:\n\n- 16MB shared buffers\n- 490MB table (10M rows)\n- synchronized_seqscan to off\n- table in OS cache\n\nI don't have a big machine so I went with a very small shared_buffers and a\nsmall table, to make sure that all data is in OS cache but the table more than\nan order bigger than the shared_buffers, to simulate some plausible environment.\n\nI used a simple read only query that performs a sequential scan of the table (a\nsimple SELECT * FROM table), run using 10 concurrent connections, 5 runs of 700\nseconds. I did that without any other activity, with a \\watch of the original\npg_check_relation function using \\watch .1, and a modified version of that\nfunction without the optimisation, still with a \\watch .1\n\nThe TPS is obviously overall extremely bad, but I can see that the submitted\nversion added an overhead of ~3.9% (average of 5 runs), while the version\nwithout the optimisation added an overhead of ~6.57%.\n\nThis is supposed to be a relatively fair benchmark as all the data are cached\non the OS side, so IO done while holding the bufmapping lock aren't too long,\nbut we can see that we already get a non negligible benefit from this\noptimisation. 
Should I do additional benchmarking, like dropping the OS cache\nand/or adding some write activity? This would probably only make the\nunoptimized version perform even worse.\n\n\n",
"msg_date": "Thu, 10 Sep 2020 20:06:10 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Thu, Sep 10, 2020 at 08:06:10PM +0200, Julien Rouhaud wrote:\n> The TPS is obviously overall extremely bad, but I can see that the submitted\n> version added an overhead of ~3.9% (average of 5 runs), while the version\n> without the optimisation added an overhead of ~6.57%.\n\nOkay, so that stands as a difference. I am planning to run some\nbenchmarks on my end as well, and see if I can see a clear\ndifference.\n\n> This is supposed to be a relatively fair benchmark as all the data are cached\n> on the OS side, so IO done while holding the bufmapping lock aren't too long,\n> but we can see that we already get a non negligible benefit from this\n> optimisation. Should I do additional benchmarking, like dropping the OS cache\n> and/or adding some write activity? This would probably only make the\n> unoptimized version perform even worse.\n\nIt would be also interesting to see the case where the pages are not\nin the OS cache and see how bad it can get. For the read-write case,\nI am not sure as we may have some different overhead that hide the\nnoise. Also, did you run your tests with the functions scanning at\nfull speed, with (ChecksumCostDelay < 0) so as there is no throttling?\n--\nMichael",
"msg_date": "Fri, 11 Sep 2020 16:34:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Fri, Sep 11, 2020 at 9:34 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Sep 10, 2020 at 08:06:10PM +0200, Julien Rouhaud wrote:\n> > The TPS is obviously overall extremely bad, but I can see that the submitted\n> > version added an overhead of ~3.9% (average of 5 runs), while the version\n> > without the optimisation added an overhead of ~6.57%.\n>\n> Okay, so that stands as a difference. I am planning to run some\n> benchmarks on my end as well, and see if I can see a clear\n> difference.\n\nThanks!\n\n> > This is supposed to be a relatively fair benchmark as all the data are cached\n> > on the OS side, so IO done while holding the bufmapping lock aren't too long,\n> > but we can see that we already get a non negligible benefit from this\n> > optimisation. Should I do additional benchmarking, like dropping the OS cache\n> > and/or adding some write activity? This would probably only make the\n> > unoptimized version perform even worse.\n>\n> It would be also interesting to see the case where the pages are not\n> in the OS cache and see how bad it can get. For the read-write case,\n> I am not sure as we may have some different overhead that hide the\n> noise. Also, did you run your tests with the functions scanning at\n> full speed, with (ChecksumCostDelay < 0) so as there is no throttling?\n\nI used all default settings, but by default checksum_cost_delay is 0\nso there shouldn't be any throttling.\n\n\n",
"msg_date": "Fri, 11 Sep 2020 09:49:16 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Fri, Sep 11, 2020 at 09:49:16AM +0200, Julien Rouhaud wrote:\n> Thanks!\n\nI got some numbers out of my pocket, using the following base\nconfiguration:\nwal_level = minimal\nfsync = off\nshared_buffers = '300MB' # also tested with 30MB and 60MB\nchecksum_cost_delay = 0 # default in patch\n\nAnd for the test I have used pgbench initialized at a scale of 250, to\nhave close to 3.5GB of data, so as it gives us a sort of 90% buffer\neviction, with all the data cached in the OS (we may want to look as\nwell at the case where the OS cache does not hold all the relation\npages). I have also run some tests with 30MB and 60MB of shared\nbuffers, for similar results.\n\nI also applied some prewarming on the cluster:\ncreate extension pg_prewarm\nselect pg_prewarm(oid) from pg_class where oid > 16000;\n\nThen, I have done five runs using that:\npgbench -S -M prepared -c 64/128/256 -n -T 60\nIn parallel of that, I got this stuff running in parallel all the\ntime:\nselect pg_check_relation('pgbench_accounts');\n\\watch 0.1 \n\nHere are some TPS numbers with the execution time of pg_check_relation.\nIn the five runs, I removed the highest and lowest ones, then took an\naverage of the remaining three. I have also tested two modes: with\nand without the optimization, that requires a one-liner in checksum.c\nas per your latest patch:\n--- a/src/backend/storage/page/checksum.c\n+++ b/src/backend/storage/page/checksum.c\n@@ -84,7 +84,7 @@ check_one_block(Relation relation, ForkNumber forknum, BlockNumber blkno,\n uint16 *chk_expected, uint16 *chk_found)\n {\n char buffer[BLCKSZ];\n- bool force_lock = false;\n+ bool force_lock = true;\n bool found_in_sb;\n\nWithin parenthesis is the amount of time taken by pg_relation_check()\nfor a single check. 
This is not completely exact and I saw some\nvariations, just to give an impression:\nConns 64 128 256\nforce_lock=true\t 60590 (7~8s) 55652 (10.2~10.5s) 46812 (9~10s)\nforce_lock=false 58637 (5s) 54131 (6~7s) 37091 (1.1~1.2s)\n\nFor connections higher than 128, I was kind of surprised to see\npg_relation_check being more aggressive without the optimization, with\nmuch less contention on the buffer mapping LWLock actually, but that's\nan impression from looking at pg_stat_activity.\n\nLooking at the wait events for each run, I saw much more hiccups with\nthe buffer mapping LWLock when forcing the lock rather than not, still\nI was able to see some contention when also not forcing the lock. Not\nsurprising as this rejects a bunch of pages from shared buffers.\n\n> I used all default settings, but by default checksum_cost_delay is 0\n> so there shouldn't be any throttling.\n\nThanks, so did I. From what I can see, there could be as well\nbenefits in not using the optimization by default so as the relation\ncheck applies some natural throttling by making the checks actually\nslower (there is a link between the individual runtime of\npg_relation_time and the TPS). So it also seems to me that the\nthrottling parameters would be beneficial, but it looks to me that\nthere is as well a point to not include any throttling in a first\nversion if the optimization to go full speed is not around. Using\nthree new GUCs for those function calls is still too much IMO, so\nthere is also the argument to move all this stuff into a new contrib/\nmodule, and have a bgworker implementation as part of it as it would\nneed shared_preload_libraries anyway.\n\nAlso, I have been putting some thoughts into the APIs able to fetch a\nbuffer without going through the shared buffers. 
And neither\nchecksum.c, because it should be a place that those APIs depends on\nand include only the most-internal logic for checksum algorithm and\ncomputation, nor checksumfuncs.c, because we need to think about the\ncase of a background worker as well (that could spawn a set of dynamic\nworkers connecting to different databases able to do checksum\nverifications?). It would be good to keep the handling of the buffer\nmapping lock as well as the calls to smgrread() into a single place.\nReadBuffer_common() is a natural place for that, though it means for\nour use case the addition of three new options:\n- Being able to pass down directly a buffer pointer to save the page\ncontents.\n- Being able to not verify directly a page, leaving the verification\nto the caller upthread.\n- Addition of a new mode, that I am calling here RBM_PRIVATE, where we\nactually read the page from disk if not yet in shared buffers, except\nthat we fill in the contents of the page using the pointer given by\nthe caller. That's different than the use of local buffers, as we\ndon't need this much amount of complications like temporary tables of\ncourse for per-page checks.\n\nAnother idea would be to actually just let ReadBuffer_common just do\nthe check by itself, with a different mode like a kind of\nRBM_VALIDATE, where we just return a verification state of the page\nthat can be consumed by callers.\n\nThis also comes with some more advantages:\n- Tracking of reads from disk with track_io_timing.\n- Addition of some private stats dedicated to this private mode, with\nnew fields in pgBufferUsage, all in a single place\n--\nMichael",
"msg_date": "Wed, 16 Sep 2020 12:45:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Wed, Sep 16, 2020 at 11:46 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Sep 11, 2020 at 09:49:16AM +0200, Julien Rouhaud wrote:\n> > Thanks!\n>\n> I got some numbers out of my pocket, using the following base\n> configuration:\n> [...]\n>\n> Within parenthesis is the amount of time taken by pg_relation_check()\n> for a single check. This is not completely exact and I saw some\n> variations, just to give an impression:\n> Conns 64 128 256\n> force_lock=true 60590 (7~8s) 55652 (10.2~10.5s) 46812 (9~10s)\n> force_lock=false 58637 (5s) 54131 (6~7s) 37091 (1.1~1.2s)\n>\n> For connections higher than 128, I was kind of surprised to see\n> pg_relation_check being more aggressive without the optimization, with\n> much less contention on the buffer mapping LWLock actually, but that's\n> an impression from looking at pg_stat_activity.\n>\n> Looking at the wait events for each run, I saw much more hiccups with\n> the buffer mapping LWLock when forcing the lock rather than not, still\n> I was able to see some contention when also not forcing the lock. Not\n> surprising as this rejects a bunch of pages from shared buffers.\n>\n> > I used all default settings, but by default checksum_cost_delay is 0\n> > so there shouldn't be any throttling.\n>\n> Thanks, so did I. From what I can see, there could be as well\n> benefits in not using the optimization by default so as the relation\n> check applies some natural throttling by making the checks actually\n> slower (there is a link between the individual runtime of\n> pg_relation_time and the TPS).\n\nThanks a lot for the tests! I'm not surprised that forcing the lock\nwill slow down the pg_check_relation() execution, but I'm a bit\nsurprised that holding the buffer mapping lock longer in a workload\nthat has a lot of evictions actually makes things faster. 
Do you have\nany idea why that's the case?\n\n> So it also seems to me that the\n> throttling parameters would be beneficial, but it looks to me that\n> there is as well a point to not include any throttling in a first\n> version if the optimization to go full speed is not around. Using\n> three new GUCs for those function calls is still too much IMO\n\nI'm assuming that you prefer to remove both the optimization and the\nthrottling part? I'll do that with the next version unless there's\nobjections.\n\n>, so\n> there is also the argument to move all this stuff into a new contrib/\n> module, and have a bgworker implementation as part of it as it would\n> need shared_preload_libraries anyway.\n>\n> Also, I have been putting some thoughts into the APIs able to fetch a\n> buffer without going through the shared buffers. And neither\n> checksum.c, because it should be a place that those APIs depends on\n> and include only the most-internal logic for checksum algorithm and\n> computation, nor checksumfuncs.c, because we need to think about the\n> case of a background worker as well (that could spawn a set of dynamic\n> workers connecting to different databases able to do checksum\n> verifications?). It would be good to keep the handling of the buffer\n> mapping lock as well as the calls to smgrread() into a single place.\n> ReadBuffer_common() is a natural place for that, though it means for\n> our use case the addition of three new options:\n> - Being able to pass down directly a buffer pointer to save the page\n> contents.\n> - Being able to not verify directly a page, leaving the verification\n> to the caller upthread.\n> - Addition of a new mode, that I am calling here RBM_PRIVATE, where we\n> actually read the page from disk if not yet in shared buffers, except\n> that we fill in the contents of the page using the pointer given by\n> the caller. 
That's different than the use of local buffers, as we\n> don't need this much amount of complications like temporary tables of\n> course for per-page checks.\n>\n> Another idea would be to actually just let ReadBuffer_common just do\n> the check by itself, with a different mode like a kind of\n> RBM_VALIDATE, where we just return a verification state of the page\n> that can be consumed by callers.\n>\n> This also comes with some more advantages:\n> - Tracking of reads from disk with track_io_timing.\n> - Addition of some private stats dedicated to this private mode, with\n> new fields in pgBufferUsage, all in a single place\n\nI agree that putting the code nearby ReadBuffer_common() would be a\ngood idea. However, that means that I can't move all the code to\ncontrib/ I'm wondering what you'd like to see going there. I can see\nsome values in also having the SQL functions available in core rather\nthan contrib, e.g. if you need to quickly check a relation on a\nstandby, so without requiring to create the extension on the primary\nnode first.\n\n Then, I'm a bit worried about adding this code in ReadBuffer_common.\nWhat this code does is quite different, and I'm afraid that it'll make\nReadBuffer_common more complex than needed, which is maybe not a good\nidea for something as critical as this function.\n\nWhat do you think?\n\n\n",
"msg_date": "Fri, 25 Sep 2020 18:11:47 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Fri, Sep 25, 2020 at 06:11:47PM +0800, Julien Rouhaud wrote:\n> Thanks a lot for the tests! I'm not surprised that forcing the lock\n> will slow down the pg_check_relation() execution, but I'm a bit\n> surprised that holding the buffer mapping lock longer in a workload\n> that has a lot of evictions actually makes things faster. Do you have\n> any idea why that's the case?\n\nThat's still a bit unclear to me, but I have not spent much time\nthinking about this particular point either.\n\n> I'm assuming that you prefer to remove both the optimization and the\n> throttling part? I'll do that with the next version unless there's\n> objections.\n\nYeah, any tests I have done tends to show that. It would be good to\nalso check some perf profiles here, at least for the process running\nthe relation check in a loop.\n\n> I agree that putting the code nearby ReadBuffer_common() would be a\n> good idea. However, that means that I can't move all the code to\n> contrib/ I'm wondering what you'd like to see going there. I can see\n> some values in also having the SQL functions available in core rather\n> than contrib, e.g. if you need to quickly check a relation on a\n> standby, so without requiring to create the extension on the primary\n> node first.\n\nGood point. This could make the user experience worse.\n\n> Then, I'm a bit worried about adding this code in ReadBuffer_common.\n> What this code does is quite different, and I'm afraid that it'll make\n> ReadBuffer_common more complex than needed, which is maybe not a good\n> idea for something as critical as this function.\n> \n> What do you think?\n\nYeah, I have been looking at ReadBuffer_common() and it is true that\nit is complicated enough so we may not really need an extra mode or\nmore options, for a final logic that is actually different than what a\nbuffer read does: we just want to know if a page has a valid checksum\nor not. 
An idea that I got here would be to add a new, separate\nfunction to do the page check directly in bufmgr.c, but that's what\nyou mean. Now only the prefetch routine and ReadBuffer_common use\npartition locks, but getting that done in the same file looks like a\ngood compromise to me. It would be also possible to keep the BLCKSZ\nbuffer used to check the page directly in this routine, so as any\ncaller willing to do a check don't need to worry about any\nallocation.\n--\nMichael",
"msg_date": "Thu, 1 Oct 2020 14:07:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Thu, Oct 1, 2020 at 1:07 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Sep 25, 2020 at 06:11:47PM +0800, Julien Rouhaud wrote:\n> > Thanks a lot for the tests! I'm not surprised that forcing the lock\n> > will slow down the pg_check_relation() execution, but I'm a bit\n> > surprised that holding the buffer mapping lock longer in a workload\n> > that has a lot of evictions actually makes things faster. Do you have\n> > any idea why that's the case?\n>\n> That's still a bit unclear to me, but I have not spent much time\n> thinking about this particular point either.\n>\n> > I'm assuming that you prefer to remove both the optimization and the\n> > throttling part? I'll do that with the next version unless there's\n> > objections.\n>\n> Yeah, any tests I have done tends to show that. It would be good to\n> also check some perf profiles here, at least for the process running\n> the relation check in a loop.\n>\n> > I agree that putting the code nearby ReadBuffer_common() would be a\n> > good idea. However, that means that I can't move all the code to\n> > contrib/ I'm wondering what you'd like to see going there. I can see\n> > some values in also having the SQL functions available in core rather\n> > than contrib, e.g. if you need to quickly check a relation on a\n> > standby, so without requiring to create the extension on the primary\n> > node first.\n>\n> Good point. 
This could make the user experience worse.\n>\n> > Then, I'm a bit worried about adding this code in ReadBuffer_common.\n> > What this code does is quite different, and I'm afraid that it'll make\n> > ReadBuffer_common more complex than needed, which is maybe not a good\n> > idea for something as critical as this function.\n> >\n> > What do you think?\n>\n> Yeah, I have been looking at ReadBuffer_common() and it is true that\n> it is complicated enough so we may not really need an extra mode or\n> more options, for a final logic that is actually different than what a\n> buffer read does: we just want to know if a page has a valid checksum\n> or not. An idea that I got here would be to add a new, separate\n> function to do the page check directly in bufmgr.c, but that's what\n> you mean. Now only the prefetch routine and ReadBuffer_common use\n> partition locks, but getting that done in the same file looks like a\n> good compromise to me. It would be also possible to keep the BLCKSZ\n> buffer used to check the page directly in this routine, so as any\n> caller willing to do a check don't need to worry about any\n> allocation.\n\nI made all the suggested modifications in attached v14:\n\n- moved the C code in bufmgr.c nearby ReadBuffer\n- removed the GUC and throttling options\n- removed the dubious optimization\n\nAll documentation and comments are updated to reflect those changes.\nI also split the commit in two, one for the backend infrastructure and\none for the SQL wrappers.",
"msg_date": "Thu, 15 Oct 2020 13:37:26 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Thu, Oct 15, 2020 at 1:37 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Thu, Oct 1, 2020 at 1:07 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Fri, Sep 25, 2020 at 06:11:47PM +0800, Julien Rouhaud wrote:\n> > > Thanks a lot for the tests! I'm not surprised that forcing the lock\n> > > will slow down the pg_check_relation() execution, but I'm a bit\n> > > surprised that holding the buffer mapping lock longer in a workload\n> > > that has a lot of evictions actually makes things faster. Do you have\n> > > any idea why that's the case?\n> >\n> > That's still a bit unclear to me, but I have not spent much time\n> > thinking about this particular point either.\n> >\n> > > I'm assuming that you prefer to remove both the optimization and the\n> > > throttling part? I'll do that with the next version unless there's\n> > > objections.\n> >\n> > Yeah, any tests I have done tends to show that. It would be good to\n> > also check some perf profiles here, at least for the process running\n> > the relation check in a loop.\n> >\n> > > I agree that putting the code nearby ReadBuffer_common() would be a\n> > > good idea. However, that means that I can't move all the code to\n> > > contrib/ I'm wondering what you'd like to see going there. I can see\n> > > some values in also having the SQL functions available in core rather\n> > > than contrib, e.g. if you need to quickly check a relation on a\n> > > standby, so without requiring to create the extension on the primary\n> > > node first.\n> >\n> > Good point. 
This could make the user experience worse.\n> >\n> > > Then, I'm a bit worried about adding this code in ReadBuffer_common.\n> > > What this code does is quite different, and I'm afraid that it'll make\n> > > ReadBuffer_common more complex than needed, which is maybe not a good\n> > > idea for something as critical as this function.\n> > >\n> > > What do you think?\n> >\n> > Yeah, I have been looking at ReadBuffer_common() and it is true that\n> > it is complicated enough so we may not really need an extra mode or\n> > more options, for a final logic that is actually different than what a\n> > buffer read does: we just want to know if a page has a valid checksum\n> > or not. An idea that I got here would be to add a new, separate\n> > function to do the page check directly in bufmgr.c, but that's what\n> > you mean. Now only the prefetch routine and ReadBuffer_common use\n> > partition locks, but getting that done in the same file looks like a\n> > good compromise to me. It would be also possible to keep the BLCKSZ\n> > buffer used to check the page directly in this routine, so as any\n> > caller willing to do a check don't need to worry about any\n> > allocation.\n>\n> I made all the suggested modifications in attached v14:\n>\n> - moved the C code in bufmgr.c nearby ReadBuffer\n> - removed the GUC and throttling options\n> - removed the dubious optimization\n>\n> All documentation and comments are updated to reflect those changes.\n> I also split the commit in two, one for the backend infrastructure and\n> one for the SQL wrappers.\n\nAnd I did miss a reference in the sgml documentation, fixed in v15.",
"msg_date": "Thu, 15 Oct 2020 15:59:25 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Thu, Oct 15, 2020 at 3:59 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Thu, Oct 15, 2020 at 1:37 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Thu, Oct 1, 2020 at 1:07 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Fri, Sep 25, 2020 at 06:11:47PM +0800, Julien Rouhaud wrote:\n> > > > Thanks a lot for the tests! I'm not surprised that forcing the lock\n> > > > will slow down the pg_check_relation() execution, but I'm a bit\n> > > > surprised that holding the buffer mapping lock longer in a workload\n> > > > that has a lot of evictions actually makes things faster. Do you have\n> > > > any idea why that's the case?\n> > >\n> > > That's still a bit unclear to me, but I have not spent much time\n> > > thinking about this particular point either.\n> > >\n> > > > I'm assuming that you prefer to remove both the optimization and the\n> > > > throttling part? I'll do that with the next version unless there's\n> > > > objections.\n> > >\n> > > Yeah, any tests I have done tends to show that. It would be good to\n> > > also check some perf profiles here, at least for the process running\n> > > the relation check in a loop.\n> > >\n> > > > I agree that putting the code nearby ReadBuffer_common() would be a\n> > > > good idea. However, that means that I can't move all the code to\n> > > > contrib/ I'm wondering what you'd like to see going there. I can see\n> > > > some values in also having the SQL functions available in core rather\n> > > > than contrib, e.g. if you need to quickly check a relation on a\n> > > > standby, so without requiring to create the extension on the primary\n> > > > node first.\n> > >\n> > > Good point. 
This could make the user experience worse.\n> > >\n> > > > Then, I'm a bit worried about adding this code in ReadBuffer_common.\n> > > > What this code does is quite different, and I'm afraid that it'll make\n> > > > ReadBuffer_common more complex than needed, which is maybe not a good\n> > > > idea for something as critical as this function.\n> > > >\n> > > > What do you think?\n> > >\n> > > Yeah, I have been looking at ReadBuffer_common() and it is true that\n> > > it is complicated enough so we may not really need an extra mode or\n> > > more options, for a final logic that is actually different than what a\n> > > buffer read does: we just want to know if a page has a valid checksum\n> > > or not. An idea that I got here would be to add a new, separate\n> > > function to do the page check directly in bufmgr.c, but that's what\n> > > you mean. Now only the prefetch routine and ReadBuffer_common use\n> > > partition locks, but getting that done in the same file looks like a\n> > > good compromise to me. It would be also possible to keep the BLCKSZ\n> > > buffer used to check the page directly in this routine, so as any\n> > > caller willing to do a check don't need to worry about any\n> > > allocation.\n> >\n> > I made all the suggested modifications in attached v14:\n> >\n> > - moved the C code in bufmgr.c nearby ReadBuffer\n> > - removed the GUC and throttling options\n> > - removed the dubious optimization\n> >\n> > All documentation and comments are updated to reflect those changes.\n> > I also split the commit in two, one for the backend infrastructure and\n> > one for the SQL wrappers.\n>\n> And I did miss a reference in the sgml documentation, fixed in v15.\n\nI forgot to add the modified file in the previous attachment, sorry\nfor the noise.",
"msg_date": "Fri, 16 Oct 2020 08:59:38 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Fri, Oct 16, 2020 at 8:59 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Thu, Oct 15, 2020 at 3:59 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Thu, Oct 15, 2020 at 1:37 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > > On Thu, Oct 1, 2020 at 1:07 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > > >\n> > > > On Fri, Sep 25, 2020 at 06:11:47PM +0800, Julien Rouhaud wrote:\n> > > > > Thanks a lot for the tests! I'm not surprised that forcing the lock\n> > > > > will slow down the pg_check_relation() execution, but I'm a bit\n> > > > > surprised that holding the buffer mapping lock longer in a workload\n> > > > > that has a lot of evictions actually makes things faster. Do you have\n> > > > > any idea why that's the case?\n> > > >\n> > > > That's still a bit unclear to me, but I have not spent much time\n> > > > thinking about this particular point either.\n> > > >\n> > > > > I'm assuming that you prefer to remove both the optimization and the\n> > > > > throttling part? I'll do that with the next version unless there's\n> > > > > objections.\n> > > >\n> > > > Yeah, any tests I have done tends to show that. It would be good to\n> > > > also check some perf profiles here, at least for the process running\n> > > > the relation check in a loop.\n> > > >\n> > > > > I agree that putting the code nearby ReadBuffer_common() would be a\n> > > > > good idea. However, that means that I can't move all the code to\n> > > > > contrib/ I'm wondering what you'd like to see going there. I can see\n> > > > > some values in also having the SQL functions available in core rather\n> > > > > than contrib, e.g. if you need to quickly check a relation on a\n> > > > > standby, so without requiring to create the extension on the primary\n> > > > > node first.\n> > > >\n> > > > Good point. 
This could make the user experience worse.\n> > > >\n> > > > > Then, I'm a bit worried about adding this code in ReadBuffer_common.\n> > > > > What this code does is quite different, and I'm afraid that it'll make\n> > > > > ReadBuffer_common more complex than needed, which is maybe not a good\n> > > > > idea for something as critical as this function.\n> > > > >\n> > > > > What do you think?\n> > > >\n> > > > Yeah, I have been looking at ReadBuffer_common() and it is true that\n> > > > it is complicated enough so we may not really need an extra mode or\n> > > > more options, for a final logic that is actually different than what a\n> > > > buffer read does: we just want to know if a page has a valid checksum\n> > > > or not. An idea that I got here would be to add a new, separate\n> > > > function to do the page check directly in bufmgr.c, but that's what\n> > > > you mean. Now only the prefetch routine and ReadBuffer_common use\n> > > > partition locks, but getting that done in the same file looks like a\n> > > > good compromise to me. It would be also possible to keep the BLCKSZ\n> > > > buffer used to check the page directly in this routine, so as any\n> > > > caller willing to do a check don't need to worry about any\n> > > > allocation.\n> > >\n> > > I made all the suggested modifications in attached v14:\n> > >\n> > > - moved the C code in bufmgr.c nearby ReadBuffer\n> > > - removed the GUC and throttling options\n> > > - removed the dubious optimization\n> > >\n> > > All documentation and comments are updated to reflect those changes.\n> > > I also split the commit in two, one for the backend infrastructure and\n> > > one for the SQL wrappers.\n> >\n> > And I did miss a reference in the sgml documentation, fixed in v15.\n>\n> I forgot to add the modified file in the previous attachment, sorry\n> for the noise.\n\nAnd Michael just told me that I also missed adding one of the C files\nwhile splitting the patch into two.",
"msg_date": "Fri, 16 Oct 2020 09:22:02 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Fri, Oct 16, 2020 at 09:22:02AM +0800, Julien Rouhaud wrote:\n> And Michael just told me that I also missed adding one of the C files\n> while splitting the patch into two.\n\n+ if (PageIsNew(page))\n+ {\n+ /*\n+ * Check if the page is really new or if there's corruption that\n+ * affected PageIsNew detection. Note that PageIsVerified won't try to\n+ * detect checksum corruption in this case, so there's no risk of\n+ * duplicated corruption report.\n+ */\n+ if (PageIsVerified(page, blkno))\n+ {\n+ /* No corruption. */\n+ return true;\n+ }\nPlease note that this part of your patch overlaps with a proposal for\na bug fix related to zero-only pages with the checksum verification of\nbase backups:\nhttps://www.postgresql.org/message-id/608f3476-0598-2514-2c03-e05c7d2b0cbd@postgrespro.ru\n\nYour patch is trying to adapt itself to the existing logic we have in\nPageIsVerified() so as you don't get a duplicated report, as does the\nbase backup part. Note that I cannot find in the wild any open code\nmaking use of PageIsVerified(), but I'd like to believe that it is\nrather handy to use for table AMs at least (?), so if we can avoid any\nuseless ABI breakage, it may be better to have a new\nPageIsVerifiedExtended() able to take additional options, one to\nreport to pgstat and one to generate this elog(WARNING). 
And then\nthis patch could just make use of it?\n\n+ /*\n+ * There's corruption, but since this affects PageIsNew, we\n+ * can't compute a checksum, so set NoComputedChecksum for the\n+ * expected checksum.\n+ */\n+ *chk_expected = NoComputedChecksum;\n+ *chk_found = hdr->pd_checksum;\n+ return false;\n[...]\n+ /*\n+ * This can happen if corruption makes the block appears as\n+ * PageIsNew() but isn't a new page.\n+ */\n+ if (chk_expected == NoComputedChecksum)\n+ nulls[i++] = true;\n+ else\n+ values[i++] = UInt16GetDatum(chk_expected);\nSomewhat related to the first point, NoComputedChecksum exists\nbecause, as the current patch is shaped, we need to report an existing\nchecksum to the user even for the zero-only case. PageIsVerified() is\nnot that flexible so we could change it to report a status depending\non the error faced (checksum, header or zero-only) on top of getting a\nchecksum. Now, I am not completely sure either that it is worth the\ncomplication to return in the SRF of the check function the expected\nchecksum. So, wouldn't it be better to just rely on PageIsVerified()\n(well it's rather-soon-to-be extended flavor) for the checksum check,\nthe header sanity check and the zero-only check? My point is to keep\na single entry point for all the page sanity checks, so as base\nbackups, your patch, and the buffer manager apply the same things.\nBase backups got that partially wrong because the base backup code\nwants to keep control of the number of failures and the error\nreports. Your patch actually wishes to report a failure, but you want\nto add more context with the fork name and such. Another option we\ncould use here is to add an error context so as PageIsVerified()\nreports the WARNING, but the SQL function provides more context with\nthe block number and the relation involved in the check.\n--\nMichael",
"msg_date": "Mon, 19 Oct 2020 11:39:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Mon, Oct 19, 2020 at 10:39 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Oct 16, 2020 at 09:22:02AM +0800, Julien Rouhaud wrote:\n> > And Michael just told me that I also missed adding one of the C files\n> > while splitting the patch into two.\n>\n> + if (PageIsNew(page))\n> + {\n> + /*\n> + * Check if the page is really new or if there's corruption that\n> + * affected PageIsNew detection. Note that PageIsVerified won't try to\n> + * detect checksum corruption in this case, so there's no risk of\n> + * duplicated corruption report.\n> + */\n> + if (PageIsVerified(page, blkno))\n> + {\n> + /* No corruption. */\n> + return true;\n> + }\n> Please note that this part of your patch overlaps with a proposal for\n> a bug fix related to zero-only pages with the checksum verification of\n> base backups:\n> https://www.postgresql.org/message-id/608f3476-0598-2514-2c03-e05c7d2b0cbd@postgrespro.ru\n>\n> Your patch is trying to adapt itself to the existing logic we have in\n> PageIsVerified() so as you don't get a duplicated report, as does the\n> base backup part. Note that I cannot find in the wild any open code\n> making use of PageIsVerified(), but I'd like to believe that it is\n> rather handy to use for table AMs at least (?), so if we can avoid any\n> useless ABI breakage, it may be better to have a new\n> PageIsVerifiedExtended() able to take additional options, one to\n> report to pgstat and one to generate this elog(WARNING). 
And then\n> this patch could just make use of it?\n\nIndeed, that would be great.\n\n> + /*\n> + * There's corruption, but since this affects PageIsNew, we\n> + * can't compute a checksum, so set NoComputedChecksum for the\n> + * expected checksum.\n> + */\n> + *chk_expected = NoComputedChecksum;\n> + *chk_found = hdr->pd_checksum;\n> + return false;\n> [...]\n> + /*\n> + * This can happen if corruption makes the block appears as\n> + * PageIsNew() but isn't a new page.\n> + */\n> + if (chk_expected == NoComputedChecksum)\n> + nulls[i++] = true;\n> + else\n> + values[i++] = UInt16GetDatum(chk_expected);\n> Somewhat related to the first point, NoComputedChecksum exists\n> because, as the current patch is shaped, we need to report an existing\n> checksum to the user even for the zero-only case.\n\n\nI'm not sure that I understand your point. The current patch only\nreturns something to users when there's a corruption. If by\n\"zero-only case\" you mean \"page corrupted in a way that PageIsNew()\nreturns true while not being all zero\", then it's a corrupted page and\nthen obviously yes it needs to be returned to users.\n\n> PageIsVerified() is\n> not that flexible so we could change it to report a status depending\n> on the error faced (checksum, header or zero-only) on top of getting a\n> checksum. Now, I am not completely sure either that it is worth the\n> complication to return in the SRF of the check function the expected\n> checksum.\n\nIt seemed to me that it could be something useful to get with this\nkind of tool. You may be able to recover a corrupted page from\nbackup/WAL if the checksum itself wasn't corrupted so that you know\nwhat to look for. There would be a lot of caveats and low level work,\nbut if you're desperate enough that may save you a bit of time.\n\n> So, wouldn't it be better to just rely on PageIsVerified()\n> (well it's rather-soon-to-be extended flavor) for the checksum check,\n> the header sanity check and the zero-only check? 
My point is to keep\n> a single entry point for all the page sanity checks, so as base\n> backups, your patch, and the buffer manager apply the same things.\n> Base backups got that partially wrong because the base backup code\n> wants to keep control of the number of failures and the error\n> reports.\n\nI'm fine with it.\n\n> Your patch actually wishes to report a failure, but you want\n> to add more context with the fork name and such. Another option we\n> could use here is to add an error context so as PageIsVerified()\n> reports the WARNING, but the SQL function provides more context with\n> the block number and the relation involved in the check.\n\nAlso, returning actual data rather than a bunch of warnings is way\neasier to process for client code. And as mentioned previously having\nan API that returns a list of corrupted blocks could be useful for a\nsingle-page recovery feature.\n\n\n",
"msg_date": "Mon, 19 Oct 2020 11:16:38 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Mon, Oct 19, 2020 at 11:16:38AM +0800, Julien Rouhaud wrote:\n> On Mon, Oct 19, 2020 at 10:39 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> Somewhat related to the first point, NoComputedChecksum exists\n>> because, as the current patch is shaped, we need to report an existing\n>> checksum to the user even for the zero-only case.\n> \n> I'm not sure that I understand your point. The current patch only\n> returns something to users when there's a corruption. If by\n> \"zero-only case\" you mean \"page corrupted in a way that PageIsNew()\n> returns true while not being all zero\", then it's a corrupted page and\n> then obviously yes it needs to be returned to users.\n\nSorry for the confusion, this previous paragraph was confusing. I\nmeant that the reason why NoComputedChecksum exists is that we give up\non attempting to calculate the checksum if we detect that the page is\nnew, but failed the zero-only test, and that we want the users to know\nabout this special case by setting this expected checksum to NULL for\nthe SRF.\n\n>> So, wouldn't it be better to just rely on PageIsVerified()\n>> (well it's rather-soon-to-be extended flavor) for the checksum check,\n>> the header sanity check and the zero-only check? My point is to keep\n>> a single entry point for all the page sanity checks, so as base\n>> backups, your patch, and the buffer manager apply the same things.\n>> Base backups got that partially wrong because the base backup code\n>> wants to keep control of the number of failures and the error\n>> reports.\n> \n> I'm fine with it.\n\nThanks.\n\n>> Your patch actually wishes to report a failure, but you want\n>> to add more context with the fork name and such. 
Another option we\n>> could use here is to add an error context so as PageIsVerified()\n>> reports the WARNING, but the SQL function provides more context with\n>> the block number and the relation involved in the check.\n> \n> Also, returning actual data rather than a bunch of warnings is way\n> easier to process for client code. And as mentioned previously having\n> an API that returns a list of corrupted blocks could be useful for a\n> single-page recovery feature.\n\nNo issues with reporting the block number and the fork type in the SRF\nat least of course as this is helpful to detect the position of the\nbroken blocks. For the checksum found in the header and the one\ncalculated (named expected in the patch), I am less sure how to put a\nclear definition on top of that but we could always consider that\nlater and extend the SRF as needed. Once the user knows that both do\nnot match, he/she knows that there is a problem.\n--\nMichael",
"msg_date": "Mon, 19 Oct 2020 16:52:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Mon, Oct 19, 2020 at 04:52:48PM +0900, Michael Paquier wrote:\n> No issues with reporting the block number and the fork type in the SRF\n> at least of course as this is helpful to detect the position of the\n> broken blocks. For the checksum found in the header and the one\n> calculated (named expected in the patch), I am less sure how to put a\n> clear definition on top of that but we could always consider that\n> later and extend the SRF as needed. Once the user knows that both do\n> not match, he/she knows that there is a problem.\n\nSo, I have reviewed your patch set, and heavily reworked the logic to\nbe more consistent on many things, resulting in a largely simplified\npatch without sacrificing its usefulness:\n- The logic of the core routine of bufmgr.c is unchanged. I have\nsimplified it a bit though by merging the subroutines that were part\nof the patch. SMgrRelation is used as argument instead of a\nRelation. That's more consistent with the surroundings. The initial\nread of a page without locks is still on the table as an extra\noptimization, though I am not completely sure if this should be part\nof CheckBuffer() or not. I also thought about the previous benchmarks\nand I think that not using the most-optimized version improved performance,\nbecause it reduced the successive runs of the SQL functions, reducing\nthe back-pressure on the partition locks (we held one of them at the\nsame time for a longer period, meaning that the other 127 ran free for\na longer time). Please note that this part still referred to a\n\"module\", which was incorrect.\n- Removal of the expected and found checksums from the SRF of the\nfunction. Based on my recent business with the page checks for base\nbackups, I have arrived at the conclusion that the SRF should return\ndata that we can absolutely trust, and the minimum I think we have to\ntrust here is if a given page is thought to be safe or not, considering\nall the sanity checks done by PageIsVerified() as the main entry\npoint for everything. This has led to a bit of confusion with the\naddition of NoComputedChecksum for a page that was empty as of the\ninitial version of the patch, so it happens that we don't need it anymore.\n- Removal of the dependency with checksums for this feature. While\nsimplifying the code, I have noticed that this feature can also be\nbeneficial for clusters that do not have data checksums, as\nPageIsVerified() is perfectly able to run some page header checks and\nthe zero-page case. That's of course less useful than having the\nchecksums, but there is no need to add a dependency here. The file\nfor the SQL functions is renamed from checksumfuncs.c to pagefuncs.c.\n- The function is changed to return no tuples if the relkind is not\nsupported, and the same applies for temporary relations. That's more\nconsistent with other system functions like the ones in charge of\npartition information, and this makes full scans of pg_class much\neasier to work with. Temporary tables were not handled correctly\nanyway as these are in local buffers, but the use case of this\nfunction in this case is not really obvious to me.\n- Having the forknum in the SRF is kind of confusing, as the user\nwould need to map a number with the physical on-disk name. Instead, I\nhave made the choice to return the *path* of the corrupted file with a\nblock number. This way, an operator can know immediately where a\nproblem comes from. The patch does not append the segment number, and\nI am not sure if we should actually do that, but adding it is\nstraight-forward as we have the block number. 
There is a dependency\nwith table AMs here as well, as this goes down to fd.c, explaining why\nI have not added it.\n- I really don't know what kind of default ACL should apply for such\nfunctions, but I am sure that SCAN_TABLES is not what we are looking\nfor here, and there is nothing preventing us from having a safe\ndefault from the start of time, so I moved the function to be\nsuperuser-only by default, and GRANT can be used to allow its\nexecution to other roles. We could relax that in the future, of\ncourse; this can be discussed separately.\n- The WARNING report for each block found as corrupted gains an error\ncontext, as a result of a switch to PageIsVerified(), giving a user\nall the information needed in the logs on top of the result in the\nSRF. That's useful as well if combined with CHECK_FOR_INTERRUPTS(),\nand I got to wonder if we should have some progress report for this\nstuff, though that's a separate discussion.\n- The function is renamed to something less generic,\npg_relation_check_pages(), and I have reduced the number of functions\nfrom two to one, where the user can specify the fork name with a new\noption. The default of NULL means that all the forks of a relation\nare checked.\n- The TAP tests are rather bulky. I have moved all the non-corruption\ntest cases into a new SQL test file. That's useful for people willing\nto do some basic sanity checks with a non-default table AM. At least\nit provides a minimum of coverage. I have not completely finished my\nreview, but I have done some work. Doing some debugging of\ncorrupt_and_test_block() was proving to be rather difficult as the\nsame test names are assigned multiple times. I am tempted to move\nthis test suite to src/test/recovery/ instead.\n- Reworked the docs and some comments.\n\nThat's quite a lot of changes, and I think that most of the C code,\nthe main tests in src/test/regress/ and the docs are getting into a\nrather committable state. The TAP tests emulating corruptions still\nneed a closer look (say, check_pg_stat_database_nb_error() should\nhave an error prefix at least). The portions in bufmgr.c and the rest\nshould of course be split into two separate commits, which can easily\nbe done. And the code needs an indentation run and a catalog bump.\n--\nMichael",
"msg_date": "Fri, 23 Oct 2020 16:28:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Fri, Oct 23, 2020 at 3:28 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Oct 19, 2020 at 04:52:48PM +0900, Michael Paquier wrote:\n> > No issues with reporting the block number and the fork type in the SRF\n> > at least of course as this is helpful to detect the position of the\n> > broken blocks. For the checksum found in the header and the one\n> > calculated (named expected in the patch), I am less sure how to put a\n> > clear definition on top of that but we could always consider that\n> > later and extend the SRF as needed. Once the user knows that both do\n> > not match, he/she knows that there is a problem.\n>\n> So, I have reviewed your patch set, and heavily reworked the logic to\n> be more consistent on many thinks, resulting in a largely simplified\n> patch without sacrificing its usefulness:\n\nThanks!\n\n> - Removal of the dependency with checksums for this feature. While\n> simplifying the code, I have noticed that this feature can also be\n> beneficial for clusters that do not have have data checksums, as\n> PageIsVerified() is perfectly able to run some page header checks and\n> the zero-page case. That's of course less useful than having the\n> checksums, but there is no need to add a dependency here. The file\n> for the SQL functions is renamed from checksumfuncs.c to pagefuncs.c.\n\nI agree. However I'm assuming that this refactor is relying on the\nnot yet committed patch (in the nearby thread dealing with base backup\nchecksums check) to also refactor PageIsVerified? As all the logic\nyou removed was done to avoid spamming a lot of warnings when calling\nthe function.\n\n> - The function is changed to return no tuples if the relkind is not\n> supported, and the same applies for temporary relations. That's more\n> consistent with other system functions like the ones in charge of\n> partition information, and this makes full scans of pg_class much\n> easier to work with. 
Temporary tables were not handled correctly\n> anyway as these are in local buffers, but the use case of this\n> function in this case is not really obvious to me.\n\nAgreed\n\n> - Having the forknum in the SRF is kind of confusing, as the user\n> would need to map a number with the physical on-disk name. Instead, I\n> have made the choice to return the *path* of the corrupted file with a\n> block number. This way, an operator can know immediately where a\n> problem comes from. The patch does not append the segment number, and\n> I am not sure if we should actually do that, but adding it is\n> straight-forward as we have the block number. There is a dependency\n> with table AMs here as well, as this goes down to fd.c, explaining why\n> I have not added it and just.\n\nThat's a clear improvement, thanks!\n\n> - I really don't know what kind of default ACL should apply for such\n> functions, but I am sure that SCAN_TABLES is not what we are looking\n> for here, and there is nothing preventing us from having a safe\n> default from the start of times, so I moved the function to be\n> superuser-only by default, and GRANT can be used to allow its\n> execution to other roles. We could relax that in the future, of\n> course, this can be discussed separately.\n\nI don't have a strong opinion here, SCAN_TABLES was maybe not ideal.\nNo objections.\n\n> - The WARNING report for each block found as corrupted gains an error\n> context, as a result of a switch to PageIsVerified(), giving a user\n> all the information needed in the logs on top of the result in the\n> SRF. That's useful as well if combined with CHECK_FOR_INTERRUPTS(),\n> and I got to wonder if we should have some progress report for this\n> stuff, though that's a separate discussion.\n\nMmm, is it really an improvement to report warnings during this\nfunction execution? 
Note also that PageIsVerified as-is won't report\na warning if a page is found as PageIsNew() but isn't actually all\nzero, while still being reported as corrupted by the SRF.\n\nHave you also considered that it's possible to execute\npg_relation_check_pages with ignore_checksum_failure = on? That's\nevidently a bad idea, but doing so would report some of the data\ncorruption as warnings while still not reporting anything in the SRF.\n\nHaving some progress report would be nice to have, but +1 to have a\nseparate discussion for that.\n\n> - The function is renamed to something less generic,\n> pg_relation_check_pages(), and I have reduced the number of functions\n> from two to one, where the user can specify the fork name with a new\n> option. The default of NULL means that all the forks of a relation\n> are checked.\n\nOk.\n\n> - The TAP tests are rather bulky. I have moved all the non-corruption\n> test cases into a new SQL test file. That's useful for people willing\n> to do some basic sanity checks with a non-default table AM. At least\n> it provides a minimum coverage.\n\nAgreed\n\n\n",
"msg_date": "Fri, 23 Oct 2020 16:31:56 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Fri, Oct 23, 2020 at 04:31:56PM +0800, Julien Rouhaud wrote:\n> I agree. However I'm assuming that this refactor is relying on the\n> not yet committed patch (in the nearby thread dealing with base backup\n> checksums check) to also refactor PageIsVerified? As all the logic\n> you removed was done to avoid spamming a lot of warnings when calling\n> the function.\n\nYeah, it should use a refactored version, but I was as well in the\nmood of looking at version based on what we have now on HEAD. Even if\nI am not completely clear where the patch for page verification and\nbase backups will go, I was thinking as well to do the refactoring\nintroducing PageIsVerifiedExtended() first, before considering the\nnext steps for this thread. It seems to me that the path where we\ngenerate no WARNINGs at all makes the whole experience more consistent\nfor the user with this function.\n\n> Mmm, is it really an improvement to report warnings during this\n> function execution? Note also that PageIsVerified as-is won't report\n> a warning if a page is found as PageIsNew() but isn't actually all\n> zero, while still being reported as corrupted by the SRF.\n\nYep, joining the point of above to just have no WARNINGs at all.\n\n> Have you also considered that it's possible to execute\n> pg_relation_check_pages with ignore_checksum_failure = on? That's\n> evidently a bad idea, but doing so would report some of the data\n> corruption as warnings while still not reporting anything in the SRF.\n\nYeah, I thought about that as well, but I did not see a strong\nargument against preventing this behavior either, even if it sounds\na bit strange. We could always tune that later depending on the\nfeedback.\n--\nMichael",
"msg_date": "Fri, 23 Oct 2020 18:06:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Fri, Oct 23, 2020 at 06:06:30PM +0900, Michael Paquier wrote:\n> On Fri, Oct 23, 2020 at 04:31:56PM +0800, Julien Rouhaud wrote:\n>> Mmm, is it really an improvement to report warnings during this\n>> function execution? Note also that PageIsVerified as-is won't report\n>> a warning if a page is found as PageIsNew() but isn't actually all\n>> zero, while still being reported as corrupted by the SRF.\n> \n> Yep, joining the point of above to just have no WARNINGs at all.\n\nNow that we have d401c57, I got to consider this one more, and opted\nfor not generating a WARNING for now. Hence, PageIsVerifiedExtended()\nis disabled regarding that, but we still report a checksum failure in\nit.\n\nI have spent some time reviewing the tests, as I felt they were\nbulky. In the reworked version attached, I have reduced the number of\ntests by half, without reducing the coverage, mainly:\n- Removed all the stderr and the return code tests, as we always\nexpected the commands to succeed, and safe_psql() can do the job\nalready.\n- Merged the queries using pg_relation_check_pages into a single\nroutine, with the expected output (set of broken pages returned in the\nSRF) in the arguments.\n- Added some prefixes to the tests, to generate unique test names.\nThat makes debugging easier.\n- The query on pg_stat_database is run once at the beginning, once at\nthe end with the number of checksum failures correctly updated.\n- Added comments to document all the routines, and renamed some of\nthem mostly for consistency.\n- Skipped system relations in the scan of pg_class, as they made the\ntest more costly for nothing.\n- I ran some tests on Windows, just in case.\n\nI have also added a SearchSysCacheExists1() to double-check if the\nrelation is missing before opening it, added a\nCHECK_FOR_INTERRUPTS() within the main check loop (where the error\ncontext is really helpful), indented the code, bumped the catalogs\n(mostly a self-reminder), etc.\n\nSo, what do you think?\n--\nMichael",
"msg_date": "Tue, 27 Oct 2020 16:07:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Tue, Oct 27, 2020 at 3:07 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Oct 23, 2020 at 06:06:30PM +0900, Michael Paquier wrote:\n> > On Fri, Oct 23, 2020 at 04:31:56PM +0800, Julien Rouhaud wrote:\n> >> Mmm, is it really an improvement to report warnings during this\n> >> function execution? Note also that PageIsVerified as-is won't report\n> >> a warning if a page is found as PageIsNew() but isn't actually all\n> >> zero, while still being reported as corrupted by the SRF.\n> >\n> > Yep, joining the point of above to just have no WARNINGs at all.\n>\n> Now that we have d401c57, I got to consider more this one, and opted\n> for not generating a WARNING for now. Hence, PageisVerifiedExtended()\n> is disabled regarding that, but we still report a checksum failure in\n> it.\n\nGreat, that's also what I had in mind.\n\n> I have spent some time reviewing the tests, and as I felt this was\n> bulky. In the reworked version attached, I have reduced the number of\n> tests by half, without reducing the coverage, mainly:\n> - Removed all the stderr and the return code tests, as we always\n> expected the commands to succeed, and safe_psql() can do the job\n> already.\n> - Merged of the queries using pg_relation_check_pages into a single\n> routine, with the expected output (set of broken pages returned in the\n> SRF) in the arguments.\n> - Added some prefixes to the tests, to generate unique test names.\n> That makes debug easier.\n> - The query on pg_stat_database is run once at the beginning, once at\n> the end with the number of checksum failures correctly updated.\n> - Added comments to document all the routines, and renamed some of\n> them mostly for consistency.\n> - Skipped system relations from the scan of pg_class, making the test\n> more costly for nothing.\n> - I ran some tests on Windows, just-in-case.\n>\n> I have also added a SearchSysCacheExists1() to double-check if the\n> relation is missing before opening it, added a\n> 
CHECK_FOR_INTERRUPTS() within the main check loop (where the error\n> context is really helpful), indented the code, bumped the catalogs\n> (mostly a self-reminder), etc.\n>\n> So, what do you think?\n\nI think it's also worth noting that the IOLock is now acquired just\nbefore getting the buffer state, and released after the read (or after\nfinding that the buffer is dirty). This is consistent with how it's\ndone elsewhere, so I'm fine.\n\nOther than that I'm quite happy with the changes you made, thanks a lot!\n\n\n",
"msg_date": "Tue, 27 Oct 2020 19:47:19 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Tue, Oct 27, 2020 at 07:47:19PM +0800, Julien Rouhaud wrote:\n> I think it's also worth noting that the IOLock is now acquired just\n> before getting the buffer state, and released after the read (or after\n> finding that the buffer is dirty). This is consistent with how it's\n> done elsewhere, so I'm fine.\n\nConsistency is the point. This API should be safe to use by design.\nI have done some extra performance tests similar to what I did\nupthread, and this version showed similar numbers.\n\n> Other than that I'm quite happy with the changes you made, thanks a lot!\n\nThanks for confirming. I have gone through the whole set today,\nsplitted the thing into two commits and applied them. We had\nbuildfarm member florican complain about a mistake in one of the\nGetDatum() calls that I took care of already, and there is nothing\nelse on my radar.\n--\nMichael",
"msg_date": "Wed, 28 Oct 2020 14:08:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "Hi,\n\nI have tested this great feature in the latest commit environment on Red Hat Enterprise Linux 7.8. I modified a few blocks in a relation file to raise a checksum error. When I executed the pg_relation_check_pages function, the backend terminated abnormally. The attached file is the operation log.\n\nRegards,\nNoriyoshi Shinoda\n\n-----Original Message-----\nFrom: Michael Paquier [mailto:michael@paquier.xyz] \nSent: Wednesday, October 28, 2020 2:09 PM\nTo: Julien Rouhaud <rjuju123@gmail.com>\nCc: Justin Pryzby <pryzby@telsasoft.com>; Masahiko Sawada <masahiko.sawada@2ndquadrant.com>; Robert Haas <robertmhaas@gmail.com>; PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>; Masahiko Sawada <sawada.mshk@gmail.com>\nSubject: Re: Online checksums verification in the backend\n\nOn Tue, Oct 27, 2020 at 07:47:19PM +0800, Julien Rouhaud wrote:\n> I think it's also worth noting that the IOLock is now acquired just \n> before getting the buffer state, and released after the read (or after \n> finding that the buffer is dirty). This is consistent with how it's \n> done elsewhere, so I'm fine.\n\nConsistency is the point. This API should be safe to use by design.\nI have done some extra performance tests similar to what I did upthread, and this version showed similar numbers.\n\n> Other than that I'm quite happy with the changes you made, thanks a lot!\n\nThanks for confirming. I have gone through the whole set today, splitted the thing into two commits and applied them. We had buildfarm member florican complain about a mistake in one of the\nGetDatum() calls that I took care of already, and there is nothing else on my radar.\n--\nMichael",
"msg_date": "Wed, 28 Oct 2020 23:50:35 +0000",
"msg_from": "\"Shinoda, Noriyoshi (PN Japan A&PS Delivery)\"\n\t<noriyoshi.shinoda@hpe.com>",
"msg_from_op": false,
"msg_subject": "RE: Online checksums verification in the backend"
},
{
"msg_contents": "Hello,\n\nOn Thu, Oct 29, 2020 at 7:52 AM Shinoda, Noriyoshi (PN Japan A&PS\nDelivery) <noriyoshi.shinoda@hpe.com> wrote:\n>\n> Hi,\n>\n> I have tested this great feature in the latest commit environment on Red Hat Enterprise Linux 7.8. I modified a few blocks in a relation file to raise a checksum error. When I executed the pg_relation_check_pages function, the backend terminated abnormally. The attached file is the operation log.\n\nThanks for the report!\n\nAs far as I can see the issue is that the pfree(path) in\ncheck_relation_fork() should be outside the for loop.\n\n\n",
"msg_date": "Thu, 29 Oct 2020 08:12:42 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Thu, Oct 29, 2020 at 08:12:42AM +0800, Julien Rouhaud wrote:\n> As far as I can see the issue is that the pfree(path) in\n> check_relation_fork() should be outside the for loop.\n\nYes, this would be triggered if more than one page is found broken\nin a single SRF. Fixed, thanks Shinoda-san.\n--\nMichael",
"msg_date": "Thu, 29 Oct 2020 09:39:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "Hi,\n\nOn 2020-10-28 14:08:52 +0900, Michael Paquier wrote:\n> Thanks for confirming. I have gone through the whole set today,\n> splitted the thing into two commits and applied them. We had\n> buildfarm member florican complain about a mistake in one of the\n> GetDatum() calls that I took care of already, and there is nothing\n> else on my radar.\n\nThe code does IO while holding the buffer mapping lock. That seems\n*entirely* unacceptable to me. That basically locks 1/128 of shared\nbuffers against concurrent mapping changes, while reading data that is\nlikely not to be on disk? Seriously?\n\n\n\t/* see if the block is in the buffer pool or not */\n\tLWLockAcquire(partLock, LW_SHARED);\n\tbuf_id = BufTableLookup(&buf_tag, buf_hash);\n\tif (buf_id >= 0)\n\t{\n\t\tuint32\t\tbuf_state;\n\n\t\t/*\n\t\t * Found it. Now, retrieve its state to know what to do with it, and\n\t\t * release the pin immediately. We do so to limit overhead as much as\n\t\t * possible. We keep the shared LWLock on the target buffer mapping\n\t\t * partition for now, so this buffer cannot be evicted, and we acquire\n\t\t * an I/O Lock on the buffer as we may need to read its contents from\n\t\t * disk.\n\t\t */\n\t\tbufdesc = GetBufferDescriptor(buf_id);\n\n\t\tLWLockAcquire(BufferDescriptorGetIOLock(bufdesc), LW_SHARED);\n\t\tbuf_state = LockBufHdr(bufdesc);\n\t\tUnlockBufHdr(bufdesc, buf_state);\n\n\t\t/* If the page is dirty or invalid, skip it */\n\t\tif ((buf_state & BM_DIRTY) != 0 || (buf_state & BM_TAG_VALID) == 0)\n\t\t{\n\t\t\tLWLockRelease(BufferDescriptorGetIOLock(bufdesc));\n\t\t\tLWLockRelease(partLock);\n\t\t\treturn true;\n\t\t}\n\n\t\t/* Read the buffer from disk, with the I/O lock still held */\n\t\tsmgrread(smgr, forknum, blkno, buffer);\n\t\tLWLockRelease(BufferDescriptorGetIOLock(bufdesc));\n\t}\n\telse\n\t{\n\t\t/*\n\t\t * Simply read the buffer. 
There's no risk of modification on it as\n\t\t * we are holding the buffer pool partition mapping lock.\n\t\t */\n\t\tsmgrread(smgr, forknum, blkno, buffer);\n\t}\n\n\nThe justification in the in-shared-buffers case seems to completely\nmis-judge costs too:\n\t\t * Found it. Now, retrieve its state to know what to do with it, and\n\t\t * release the pin immediately. We do so to limit overhead as much as\n\t\t * possible. We keep the shared LWLock on the target buffer mapping\n\t\t * partition for now, so this buffer cannot be evicted, and we acquire\n\t\t * an I/O Lock on the buffer as we may need to read its contents from\n\t\t * disk.\na pin is cheap. Holding the partition lock is not.\n\n\nAlso, using char[BLCKSZ] as a buffer isn't ok. This should use\nPGAlignedBlock:\n/*\n * Use this, not \"char buf[BLCKSZ]\", to declare a field or local variable\n * holding a page buffer, if that page might be accessed as a page and not\n * just a string of bytes. Otherwise the variable might be under-aligned,\n * causing problems on alignment-picky hardware. (In some places, we use\n * this to declare buffers even though we only pass them to read() and\n * write(), because copying to/from aligned buffers is usually faster than\n * using unaligned buffers.) We include both \"double\" and \"int64\" in the\n * union to ensure that the compiler knows the value must be MAXALIGN'ed\n * (cf. configure's computation of MAXIMUM_ALIGNOF).\n */\ntypedef union PGAlignedBlock\n\n\nI think this needs to be quickly reworked or reverted.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 29 Oct 2020 11:17:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
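The alignment point raised in the review is mechanical and easy to demonstrate outside PostgreSQL. A bare `char[BLCKSZ]` only guarantees byte alignment, so reading multi-byte page-header fields through it can trap on alignment-picky hardware; the union trick forces the stricter alignment without changing the size. A minimal sketch mirroring the quoted PGAlignedBlock definition (BLCKSZ hard-coded to the default 8192 here):

```c
#include <assert.h>
#include <stdalign.h>
#include <stddef.h>

#define BLCKSZ 8192

/* Folding double and long long into the union forces the whole block
 * to the alignment of the strictest member, while sizeof stays BLCKSZ
 * because 8192 is a multiple of that alignment. */
typedef union AlignedBlock
{
    char        data[BLCKSZ];
    double      force_align_d;
    long long   force_align_i64;
} AlignedBlock;
```

Declaring a local buffer as `AlignedBlock buf;` and passing `buf.data` to the read routine is then safe to reinterpret as a page.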
{
"msg_contents": "Hi,\n\nOn 2020-10-29 11:17:29 -0700, Andres Freund wrote:\n> \t\tLWLockAcquire(BufferDescriptorGetIOLock(bufdesc), LW_SHARED);\n> \t\tbuf_state = LockBufHdr(bufdesc);\n> \t\tUnlockBufHdr(bufdesc, buf_state);\n> \n> \t\t/* If the page is dirty or invalid, skip it */\n> \t\tif ((buf_state & BM_DIRTY) != 0 || (buf_state & BM_TAG_VALID) == 0)\n\nThis is weird as well. What is this supposed to do? Just locking and\nunlocking a buffer header doesn't do squat? There's no guarantee that\nthe flags haven't changed by this point, so you could just as well not\nacquire the buffer header lock.\n\nAlso, why are pages without a valid tag ignored? I can follow the\nargument for skipping it in the DIRTY case, but that doesn't apply for\nBM_TAG_VALID?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 29 Oct 2020 11:31:06 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "Hi,\n\nOn 2020-10-29 11:17:29 -0700, Andres Freund wrote:\n> The code does IO while holding the buffer mapping lock. That seems\n> *entirely* unacceptable to me. That basically locks 1/128 of shared\n> buffers against concurrent mapping changes, while reading data that is\n> likely not to be on disk? Seriously?\n\nAlso, uh, I don't think the locking of the buffer table provides you\nwith the full guarantees CheckBuffer() seems to assume:\n\n * Check the state of a buffer without loading it into the shared buffers. To\n * avoid torn pages and possible false positives when reading data, a shared\n * LWLock is taken on the target buffer pool partition mapping, and we check\n * if the page is in shared buffers or not. An I/O lock is taken on the block\n * to prevent any concurrent activity from happening.\n\nthis doesn't actually prevent all concurrent write IO, unless you hold\nan appropriate lock on the relation. There are a few places that use\nsmgrwrite()/smgrextend() to write out data bypassing shared buffers.\n\nMaybe that isn't a problem for the uses CheckBuffer() is envisioned\nfor, but that'd need a pretty detailed explanation as to when it's safe\nto use CheckBuffer() for which blocks.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 29 Oct 2020 12:10:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "Hi,\n\nOn Fri, Oct 30, 2020 at 2:17 AM Andres Freund <andres@anarazel.de> wrote:\n> The code does IO while holding the buffer mapping lock. That seems\n> *entirely* unacceptable to me. That basically locks 1/128 of shared\n> buffers against concurrent mapping changes, while reading data that is\n> likely not to be on disk? Seriously?\n\nThe initial implementation had a different approach, reading the buffer once\nwithout holding the buffer mapping lock (which could lead to some false\npositive in some unlikely scenario), and only if a corruption is detected the\nread is done once again *while holding the buffer mapping lock* to ensure it's\nnot a false positive. Some benchmarking showed that the performance was worse,\nso we dropped that optimisation. Should we go back to something like that or\ndo you have a better way to ensure a consistent read of a buffer which isn't in\nshared buffers?\n\n> a pin is cheap. Holding the partition lock is not.\n\n> The justification in the in-shared-buffers case seems to completely\n> mis-judge costs too:\n> * Found it. Now, retrieve its state to know what to do with it, and\n> * release the pin immediately. We do so to limit overhead as much as\n> * possible. We keep the shared LWLock on the target buffer mapping\n> * partition for now, so this buffer cannot be evicted, and we acquire\n> * an I/O Lock on the buffer as we may need to read its contents from\n> * disk.\n> a pin is cheap. Holding the partition lock is not.\n\nI clearly did a poor job in that case. Will fix.\n\n> Also, using char[BLCKSZ] as a buffer isn't ok. This should use\n> PGAlignedBlock:\n\nI wasn't aware of it, I will fix.\n\n> > LWLockAcquire(BufferDescriptorGetIOLock(bufdesc), LW_SHARED);\n> > buf_state = LockBufHdr(bufdesc);\n> > UnlockBufHdr(bufdesc, buf_state);\n> >\n> > /* If the page is dirty or invalid, skip it */\n> > if ((buf_state & BM_DIRTY) != 0 || (buf_state & BM_TAG_VALID) == 0)\n>\n> This is weird as well. 
What is this supposed to do? Just locking and\n> unlocking a buffer header doesn't do squat? There's no guarantee that\n> the flags haven't changed by this point, so you could just as well not\n> acquire the buffer header lock.\n\nThis is using the same approach as e.g. WaitIO() to get the state. I agree\nthat the state can change after the buffer header lock has been released, but\nI think that's something out of scope. The only guarantee that we can give is\nthat the database (or subset of relations checked) was healthy at the time the\ncheck was started, provided that your cluster survive the checkpoint happening\nafter the check ended. I don't see how we can do better than that.\n\n> Also, why are pages without a valid tag ignored? I can follow the\n> argument for skipping it in the DIRTY case, but that doesn't apply for\n> BM_TAG_VALID?\n\nAFAICT pages that aren't BM_TAG_VALID are pages newly allocated.\nThose shouldn't\nbe entirely initialized yet, and they'll be eventually written and flushed.\n\n> Also, uh, I don't think the locking of the buffer table provides you\n> with the full guarantees CheckBuffer() seems to assume:\n>\n> * Check the state of a buffer without loading it into the shared buffers. To\n> * avoid torn pages and possible false positives when reading data, a shared\n> * LWLock is taken on the target buffer pool partition mapping, and we check\n> * if the page is in shared buffers or not. An I/O lock is taken on the block\n> * to prevent any concurrent activity from happening.\n>\n> this doesn't actually prevent all concurrent write IO, unless you hold\n> an appropriate lock on the relation. 
There's a few places that use\n> smgrwrite()/smgrextend() to write out data bypassing shared buffers.\n>\n> Maybe that isn't a problem for the uses of CheckBuffer() is envisioned\n> for, but that'd need a pretty detailed explanation as to when it's safe\n> to use CheckBuffer() for which blocks.\n\nAFAICT, concurrent smgrwrite() can only happen for init forks of unlogged\nrelations, during creation. Those relations shouldn't be visible to the caller\nsnapshot, so it should be safe. I can add a comment for that if I'm not\nmistaken.\n\nFor concurrent smgrextend(), we read the relation size at the beginning of the\nfunction, so we shouldn't read newly allocated blocks. But you're right that\nit's still possible to get the size that includes a newly allocated block\nthat can be concurrently written. We can avoid that by holding a\nLOCKTAG_RELATION_EXTEND lock when reading the relation size. Would that be ok?\n\n\n",
"msg_date": "Fri, 30 Oct 2020 10:01:08 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "Hi,\n\nOn 2020-10-30 10:01:08 +0800, Julien Rouhaud wrote:\n> On Fri, Oct 30, 2020 at 2:17 AM Andres Freund <andres@anarazel.de> wrote:\n> > The code does IO while holding the buffer mapping lock. That seems\n> > *entirely* unacceptable to me. That basically locks 1/128 of shared\n> > buffers against concurrent mapping changes, while reading data that is\n> > likely not to be on disk? Seriously?\n>\n> The initial implementation had a different approach, reading the buffer once\n> without holding the buffer mapping lock (which could lead to some false\n> positive in some unlikely scenario), and only if a corruption is detected the\n> read is done once again *while holding the buffer mapping lock* to ensure it's\n> not a false positive. Some benchmarking showed that the performance was worse,\n> so we dropped that optimisation. Should we go back to something like that or\n> do you have a better way to ensure a consistent read of a buffer which isn't in\n> shared buffers?\n\nI suspect that you're gonna need something quite different than what the\nfunction is doing right now. Not because such a method will be faster in\nisolation, but because there's a chance to have it correct and not have\na significant performance impact on the rest of the system.\n\nI've not thought about it in detail yet. I suspect you'll need to\nensure there is a valid entry in the buffer mapping table for the buffer\nyou're processing. By virtue of setting BM_IO_IN_PROGRESS on that entry\nyou're going to prevent concurrent IO from starting until your part is\ndone.\n\n\n> > Also, why are pages without a valid tag ignored? 
I can follow the\n> > argument for skipping it in the DIRTY case, but that doesn't apply for\n> > BM_TAG_VALID?\n>\n> AFAICT pages that aren't BM_TAG_VALID are pages newly allocated.\n> Those shouldn't\n> be entirely initialized yet, and they'll be eventually written and flushed.\n\nWhen a page is being read there's a period when the buffer is without\nBM_TAG_VALID. It's quite possible that the locking prevents this case\nfrom being reachable - but in that case you shouldn't just accept it as\nsomething to be skipped...\n\n\n> > Also, uh, I don't think the locking of the buffer table provides you\n> > with the full guarantees CheckBuffer() seems to assume:\n> >\n> > * Check the state of a buffer without loading it into the shared buffers. To\n> > * avoid torn pages and possible false positives when reading data, a shared\n> > * LWLock is taken on the target buffer pool partition mapping, and we check\n> > * if the page is in shared buffers or not. An I/O lock is taken on the block\n> > * to prevent any concurrent activity from happening.\n> >\n> > this doesn't actually prevent all concurrent write IO, unless you hold\n> > an appropriate lock on the relation. There's a few places that use\n> > smgrwrite()/smgrextend() to write out data bypassing shared buffers.\n> >\n> > Maybe that isn't a problem for the uses of CheckBuffer() is envisioned\n> > for, but that'd need a pretty detailed explanation as to when it's safe\n> > to use CheckBuffer() for which blocks.\n>\n> AFAICT, concurrent smgrwrite() can only happen for init forks of unlogged\n> relation, during creation.\n\nThat may be the case right in core right now, but for one, there\ndefinitely are extensions going through smgrwrite() without using the\nbuffer pool. 
Essentially, what you are saying is that the introduction\nof CheckBuffer() altered what smgrwrite() is allowed to be used for,\nwithout having discussed or documented that.\n\nBefore this an AM/extension could just use smgrwrite() to write data not\nin shared buffers, as long as a locking scheme is used that prevents\nmultiple backends from doing that at the same time (trivially:\nAccessExclusiveLock).\n\n\n> Those relations shouldn't be visible to the caller\n> snapshot, so it should be safe. I can add a comment for that if I'm not\n> mistaken.\n\nThere's no comment warning that you shouldn't use CheckBuffer() to check\nevery buffer in shared buffers, or every relfilenode on disk. The latter\nwould be quite a reasonable thing, given it'd avoid needing to connect\nto every database etc.\n\n\n> For concurrent smgrextend(), we read the relation size at the beginning of the\n> function, so we shouldn't read newly allocated blocks. But you're right that\n> it's still possible to get the size that includes a newly allocated block\n> that can be concurrently written. We can avoid that be holding a\n> LOCKTAG_RELATION_EXTEND lock when reading the relation size. Would that be ok?\n\nThat could possibly work - but currently CheckBuffer() doesn't get a\nrelation, nor are the comments explaining that it has to be a relation\nin the current database or anything.\n\nI hadn't yet looked at the caller - I just started looking at\nCheckBuffer() this because it caused compilation failures after rebasing\nmy aio branch onto master (there's no IO locks anymore).\n\n\n\nLooking at the caller:\n- This is not a concurrency safe pattern:\n\n\t/* Check if relation exists. 
leaving if there is no such relation */\n\tif (!SearchSysCacheExists1(RELOID, ObjectIdGetDatum(relid)))\n\t\treturn;\n\n\trelation = relation_open(relid, AccessShareLock);\n\n there's a pretty obvious time-of-check-to-time-of-use danger here.\n\n- pg_relation_check_pages()'s docs say \"valid enough to safely be loaded\n into the server's shared buffers\". I think that's overpromising by a\n lot. It sounds like it verifies that the page cannot cause a crash or\n such when accessed - but it obviously does no such thing.\n\n- Why does check_one_relation() *silently* ignore when it's being\n passed a temporary table, or a relkind without storage?\n\n- I don't think it's good that check_one_relation() releases relation\n locks after access, but I know that others think that's fine (I think\n it's only fine for catalog relations).\n\n- I realize permission to pg_relation_check_pages() is not granted to\n non-superusers by default, but shouldn't it still perform relation\n access checks?\n\n- why does check_relation_fork() pstrdup the path?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 29 Oct 2020 19:58:34 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
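The first review point above is the classic time-of-check-to-time-of-use race: an existence check followed by a separate open can observe an object that is gone by the time it is opened. A hedged plain-C sketch of the same shape, using the filesystem as a stand-in for the catalog (`fopen()` playing the role of `relation_open()`, a missing file standing in for a dropped relation; none of this is PostgreSQL code):

```c
#include <assert.h>
#include <stdio.h>
#include <sys/stat.h>

/* Racy, like SearchSysCacheExists1() followed by relation_open():
 * the object can vanish between the existence check and the open. */
static FILE *
open_racy(const char *path)
{
    struct stat st;

    if (stat(path, &st) != 0)
        return NULL;            /* "no such relation" */
    return fopen(path, "r");    /* may still fail: the check is stale */
}

/* Safer: attempt the open directly and interpret the failure, the
 * moral equivalent of a try-open primitive that returns NULL when the
 * object has disappeared. */
static FILE *
open_direct(const char *path)
{
    return fopen(path, "r");    /* NULL covers both "gone" and "unreadable" */
}
```

The direct variant is not just shorter: it collapses check and use into one operation, so there is no window for a concurrent drop to slip into.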
{
"msg_contents": "On Fri, Oct 30, 2020 at 10:58 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-10-30 10:01:08 +0800, Julien Rouhaud wrote:\n> > On Fri, Oct 30, 2020 at 2:17 AM Andres Freund <andres@anarazel.de> wrote:\n> > > The code does IO while holding the buffer mapping lock. That seems\n> > > *entirely* unacceptable to me. That basically locks 1/128 of shared\n> > > buffers against concurrent mapping changes, while reading data that is\n> > > likely not to be on disk? Seriously?\n> >\n> > The initial implementation had a different approach, reading the buffer once\n> > without holding the buffer mapping lock (which could lead to some false\n> > positive in some unlikely scenario), and only if a corruption is detected the\n> > read is done once again *while holding the buffer mapping lock* to ensure it's\n> > not a false positive. Some benchmarking showed that the performance was worse,\n> > so we dropped that optimisation. Should we go back to something like that or\n> > do you have a better way to ensure a consistent read of a buffer which isn't in\n> > shared buffers?\n>\n> I suspect that you're gonna need something quite different than what the\n> function is doing right now. Not because such a method will be faster in\n> isolation, but because there's a chance to have it correct and not have\n> a significant performance impact onto the rest of the system.\n>\n> I've not thought about it in detail yet. Is suspect you'll need to\n> ensure there is a valid entry in the buffer mapping table for the buffer\n> you're processing. By virtue of setting BM_IO_IN_PROGRESS on that entry\n> you're going to prevent concurrent IO from starting until your part is\n> done.\n\nSo I'm assuming that the previous optimization to avoid almost every\ntime doing an IO while holding a buffer mapping lock isn't an option?\nIn that case, I don't see any other option than reverting the patch\nand discussing a new approach.\n\n\n",
"msg_date": "Fri, 30 Oct 2020 11:58:13 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On 2020-10-30 11:58:13 +0800, Julien Rouhaud wrote:\n> So I'm assuming that the previous optimization to avoid almost every\n> time doing an IO while holding a buffer mapping lock isn't an option?\n> In that case, I don't see any other option than reverting the patch\n> and discussing a new approach.\n\nI think it's pretty much *never* OK to do IO while holding a buffer\nmapping lock. You're locking a significant fraction of shared buffers\nover IO. That's just not OK. Don't think there's any place doing so\ncurrently either.\n\n\n",
"msg_date": "Thu, 29 Oct 2020 22:08:52 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
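The rule stated above generalizes to the usual discipline for shrinking a critical section: take the lock only long enough to capture or pin the shared state, then perform the slow operation outside it. A hedged pthread sketch of that pattern (the "IO" is a simulated delay and the copy-then-verify scheme is illustrative only, not how the PostgreSQL buffer manager pins pages):

```c
#include <assert.h>
#include <pthread.h>
#include <string.h>
#include <unistd.h>

#define PAGE_SIZE 8192

static pthread_mutex_t mapping_lock = PTHREAD_MUTEX_INITIALIZER;
static char shared_page[PAGE_SIZE];

/* Holding mapping_lock across the slow operation would stall every
 * thread that hashes to the same partition.  Instead, do only the
 * cheap capture under the lock, then the slow work lock-free. */
static int
verify_page_copy(char *out)
{
    pthread_mutex_lock(&mapping_lock);
    memcpy(out, shared_page, PAGE_SIZE);   /* cheap, under the lock */
    pthread_mutex_unlock(&mapping_lock);

    usleep(1000);                          /* slow "IO"/checksum, no lock held */
    return out[0] == 'x';                  /* toy verification */
}
```

Concurrent lookups on the same partition are then delayed only for the duration of a memcpy, not for a disk read.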
{
"msg_contents": "On Thu, Oct 29, 2020 at 10:08:52PM -0700, Andres Freund wrote:\n> I think its pretty much *never* OK to do IO while holding a buffer\n> mapping lock. You're locking a significant fraction of shared buffers\n> over IO. That's just not OK. Don't think there's any place doing so\n> currently either.\n\nThere is no place doing that on HEAD.\n\nThis specific point was mentioned in the first message of this thread,\n7th paragraph. That was a long thread, so it is easy to miss:\nhttps://www.postgresql.org/message-id/CAOBaU_aVvMjQn=ge5qPiJOPMmOj5=ii3st5Q0Y+WuLML5sR17w@mail.gmail.com\n\nI am wondering what you have in mind regarding the use of\nBM_IO_IN_PROGRESS or a similar flag. Wouldn't that imply some\nconsequences for other existing buffers in the table, like a possible\neviction? I'd like to think that we should not do any manipulation of\nthe buffer tables in this case. Hence, in order to prevent\nconcurrent activity from loading into shared buffers the page currently\nbeing checked on disk, I got to think that we would need something new here,\nlike a filtering hash table that would be checked each time a backend\ntries to insert an entry into the buffer tables. That's something I\nwas wondering about here:\nhttps://www.postgresql.org/message-id/20200316030638.GA2331@paquier.xyz\nI called that a preemptive lock, but you could also call that a\ndiscard filter or a virtual pin, just something to mean that a page\nlocked this way cannot be loaded into the shared buffers. I'd like to\nthink that this should not touch the existing buffer table, but it\nwould impact the performance when attempting to insert an entry in the\ntables, as anything would need to be pre-checked.\n\nAssuming that we could make this thing work without holding the\npartition lock, and assuming that we only hold a share lock on the\nrelation, we have two cases:\n1) If the buffer is in shared buffers, we have the APIs to solve that\nby using a pin, unlock the partition, and then do the I/O. 
(Still\nthat's unsafe with the smgrwrite() argument?)\n2) If the buffer is not in shared buffers, we don't have what it takes\nto solve the problem yet. But even if we solve this problem, we will\nnever really be sure that this is entirely safe, as per the argument\nwith concurrent smgrwrite() calls. Current in-core code assumes that\nthis can happen only for init forks of unlogged relations which would\nnot be visible yet in the backend doing a page check, still it can be\nreally easy to break this assumption with any new code added by a new\nfeature.\n\nThese arguments boil down to reducing the scope of CheckBuffer() as\nfollows:\n- Use an AEL on the relation, pass down a Relation instead of\nSMgrRelation, and add on the way an assertion to make sure that the\ncaller holds an AEL on the relation. I wanted to study the possibility\nof using that stuff for base backups, but if you bring the concurrent\nsmgrwrite() calls into the set of possibilities this shuts down the\nwhole study from the start.\n- It is still useful to check that a page is in shared buffers IMO, so\nif it is dirty we just discard it from the checks and rely on the\nnext checkpoint to do a flush. It is also useful to check whether the\nstate of the on-disk data is good or not if the page is not dirty, as the\npage could have gone rogue on-disk while a system was up for weeks.\n--\nMichael",
"msg_date": "Mon, 2 Nov 2020 10:05:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "Hi,\n\nI'm a bit limited writing - one handed for a while following an injury\non Friday...\n\nOn 2020-11-02 10:05:25 +0900, Michael Paquier wrote:\n> On Thu, Oct 29, 2020 at 10:08:52PM -0700, Andres Freund wrote:\n> > I think its pretty much *never* OK to do IO while holding a buffer\n> > mapping lock. You're locking a significant fraction of shared buffers\n> > over IO. That's just not OK. Don't think there's any place doing so\n> > currently either.\n> \n> There is no place doing that on HEAD.\n\nErr?\n\n\t/* see if the block is in the buffer pool or not */\n\tLWLockAcquire(partLock, LW_SHARED);\n\tbuf_id = BufTableLookup(&buf_tag, buf_hash);\n\tif (buf_id >= 0)\n\t{\n...\n\t\t/* Read the buffer from disk, with the I/O lock still held */\n\t\tsmgrread(smgr, forknum, blkno, buffer);\n\t\tLWLockRelease(BufferDescriptorGetIOLock(bufdesc));\n\t}\n\telse\n\t{\n\t\t/*\n\t\t * Simply read the buffer. There's no risk of modification on it as\n\t\t * we are holding the buffer pool partition mapping lock.\n\t\t */\n\t\tsmgrread(smgr, forknum, blkno, buffer);\n\t}\n\n\t/* buffer lookup done, so now do its check */\n\tLWLockRelease(partLock);\n\nHow is this not doing IO while holding a buffer mapping lock?\n\n\n> This specific point was mentioned in the first message of this thread,\n> 7th paragraph. That was a long thread, so it is easy to miss:\n> https://www.postgresql.org/message-id/CAOBaU_aVvMjQn=ge5qPiJOPMmOj5=ii3st5Q0Y+WuLML5sR17w@mail.gmail.com\n\nThe code clearly doesnt implement it that way.\n\n\n> I am wondering what you have in mind regarding the use of\n> BM_IO_IN_PROGRESS or a similar flag. Wouldn't that imply some\n> consequences for other existing buffers in the table, like a possible\n> eviction?\n\nYou'd need exactly one empty buffer for that - it can be reused for the\nnext to-be-checked buffer.\n\n\n> I'd like to think that we should not do any manipulation of\n> the buffer tables in this case.\n\nWhy? 
It's the way we lock buffers - why is this so special that we need\nto do differently?\n\n\n> Hence, in order to prevent a\n> concurrent activity to load in shared buffers the page currently\n> checked on disk, I got to think that we would need something new here,\n> like a filtering hash table that would be checked each time a backend\n> tries to insert an entry into the buffer tables.\n\nThat's going to slow down everything a bit - the mapping already is a\nbottleneck.\n\n\n> 1) If the buffer is in shared buffers, we have the APIs to solve that\n> by using a pin, unlock the partition, and then do the I/O. (Still\n> that's unsafe with the smgrwrite() argument?)\n\nThat's why you need an appropriate relation lock... Something CheckBuffer\ndidn't bother to document. It's a restriction, but one we probably can\nlive with.\n\n\n> 2) If the buffer is not in shared buffers, we don't have what it takes\n> to solve the problem yet.\n\nWe do. Set up enough state for the case to be otherwise the same as the\nin s_b case.\n\n> But even if we solve this problem, we will\n> never really be sure that this is entirely safe, as per the argument\n> with concurrent smgrwrite() calls. Current in-core code assumes that\n> this can happen only for init forks of unlogged relations which would\n> not be visible yet in the backend doing a page check, still it can be\n> really easy to break this assumption with any new code added by a new\n> feature.\n\nIt also happens in a few other cases than just init forks. But\nvisibility & relation locking can take care of that. But you need to\ndocument that. If the locking allows concurrent readers - and especially\nconcurrent writers, then you can't really use smgrwrite for anything but\nrelation extension.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 1 Nov 2020 17:39:40 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Sun, Nov 01, 2020 at 05:39:40PM -0800, Andres Freund wrote:\n> I'm a bit limited writing - one handed for a while following an injury\n> on Friday...\n\nOops. Take care.\n\n> On 2020-11-02 10:05:25 +0900, Michael Paquier wrote:\n> > There is no place doing that on HEAD.\n> \n> Err?\n> \n> \t/* see if the block is in the buffer pool or not */\n> \tLWLockAcquire(partLock, LW_SHARED);\n> \tbuf_id = BufTableLookup(&buf_tag, buf_hash);\n>\n> [...]\n>\n> How is this not doing IO while holding a buffer mapping lock?\n\nWell, other than the one we are discussing of course :)\n\n> \n> \n> > This specific point was mentioned in the first message of this thread,\n> > 7th paragraph. That was a long thread, so it is easy to miss:\n> > https://www.postgresql.org/message-id/CAOBaU_aVvMjQn=ge5qPiJOPMmOj5=ii3st5Q0Y+WuLML5sR17w@mail.gmail.com\n> \n> The code clearly doesnt implement it that way.\n> \n> \n> > I am wondering what you have in mind regarding the use of\n> > BM_IO_IN_PROGRESS or a similar flag. Wouldn't that imply some\n> > consequences for other existing buffers in the table, like a possible\n> > eviction?\n> \n> You'd need exactly one empty buffer for that - it can be reused for the\n> next to-be-checked buffer.\n> \n> \n> > I'd like to think that we should not do any manipulation of\n> > the buffer tables in this case.\n> \n> Why? 
It's the way we lock buffers - why is this so special that we need\n> to do it differently?\n> \n> \n> > Hence, in order to prevent a\n> > concurrent activity to load in shared buffers the page currently\n> > checked on disk, I got to think that we would need something new here,\n> > like a filtering hash table that would be checked each time a backend\n> > tries to insert an entry into the buffer tables.\n> \n> That's going to slow down everything a bit - the mapping already is a\n> bottleneck.\n> \n> \n> > 1) If the buffer is in shared buffers, we have the APIs to solve that\n> > by using a pin, unlock the partition, and then do the I/O. (Still\n> > that's unsafe with the smgrwrite() argument?)\n> \n> That's why you need an appropriate relation lock... Something CheckBuffer\n> didn't bother to document. It's a restriction, but one we probably can\n> live with.\n> \n> \n> > 2) If the buffer is not in shared buffers, we don't have what it takes\n> > to solve the problem yet.\n> \n> We do. Set up enough state for the case to be otherwise the same as the\n> in s_b case.\n> \n> > But even if we solve this problem, we will\n> > never really be sure that this is entirely safe, as per the argument\n> > with concurrent smgrwrite() calls. Current in-core code assumes that\n> > this can happen only for init forks of unlogged relations which would\n> > not be visible yet in the backend doing a page check, still it can be\n> > really easy to break this assumption with any new code added by a new\n> > feature.\n> \n> It also happens in a few other cases than just init forks. But\n> visibility & relation locking can take care of that. But you need to\n> document that. If the locking allows concurrent readers - and especially\n> concurrent writers, then you can't really use smgrwrite for anything but\n> relation extension.\n\n--\nMichael",
"msg_date": "Mon, 2 Nov 2020 10:45:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "Hi\n\nOn 2020-11-02 10:45:00 +0900, Michael Paquier wrote:\n> On Sun, Nov 01, 2020 at 05:39:40PM -0800, Andres Freund wrote:\n> > I'm a bit limited writing - one handed for a while following an injury\n> > on Friday...\n> \n> Oops. Take care.\n\nThanks!\n\n\n> > On 2020-11-02 10:05:25 +0900, Michael Paquier wrote:\n> > > There is no place doing that on HEAD.\n> > \n> > Err?\n> > How is this not doing IO while holding a buffer mapping lock?\n> \n> Well, other than the one we are discussing of course :)\n\nI am not following. Were you just confirming that it's not a thing we do?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 1 Nov 2020 17:50:06 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Sun, Nov 01, 2020 at 05:50:06PM -0800, Andres Freund wrote:\n> On 2020-11-02 10:45:00 +0900, Michael Paquier wrote:\n> > On 2020-11-02 10:05:25 +0900, Michael Paquier wrote:\n> > > > There is no place doing that on HEAD.\n> > > \n> > > Err?\n> > > How is this not doing IO while holding a buffer mapping lock?\n> > \n> > Well, other than the one we are discussing of course :)\n> \n> I am not following. Were you just confirming that it's not a thing we do?\n\nI meant that this is not done in any place other than the one\nintroduced by c780a7a. So we have one place where it happens, and\nno places before c780a7a.\n--\nMichael",
"msg_date": "Mon, 2 Nov 2020 16:29:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Thu, Oct 29, 2020 at 2:17 PM Andres Freund <andres@anarazel.de> wrote:\n> I think this needs to be quickly reworked or reverted.\n\nI don't like this patch, either. In addition to what Andres mentioned,\nCheckBuffer() is a completely-special case mechanism which can't be\nreused by anything else. In particular, the amcheck for heap stuff\nwhich I recently committed (866e24d47db1743dfcff5bd595b57e3a143f2cb1)\nwould really like a way to examine a buffer without risking an error\nif PageIsVerified() should happen to fail, but this patch is of\nabsolutely no use in getting there, because CheckBuffer() doesn't give\nthe caller any way to access the contents of the buffer. It can only\ndo the checks that it knows how to do, and that's it. That doesn't\nseem like a good design.\n\nI don't like the fact that CheckBuffer() silently skips dirty buffers,\neither. The comment should really say that it checks the state of a\nbuffer without loading it into shared buffers, except sometimes it\ndoesn't actually check it. That doesn't seem like the behavior users\nreally want, and it's not clear that there is any really good reason\nfor it. If the buffer is in shared buffers, we could take a share-lock\non the buffer and copy the contents of the page as it exists on disk,\nand then still check it.\n\nIt feels really confusing to me that the user-exposed function here is\ncalled pg_relation_check_pages(). How is the user supposed to\nunderstand the difference between what this function does and what the\nnew verify_heapam() in amcheck does? The answer is that the latter\ndoes far more extensive checks, but this isn't obvious from the SGML\ndocumentation, which says only that the blocks are \"verified,\" as if\nan end-user can reasonably be expected to know what that means. It\nseems likely to lead users to the belief that if this function passes,\nthey are in good shape, which is extremely far from being true. 
Just\nlook at what PageIsVerified() checks compared to what verify_heapam()\nchecks.\n\nIn fact, I would argue that this functionality ought to live in\namcheck rather than core, though there could usefully be enabling\nfunctions in core.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 2 Nov 2020 12:35:30 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "Hi,\n\nOn 2020-11-02 12:35:30 -0500, Robert Haas wrote:\n> On Thu, Oct 29, 2020 at 2:17 PM Andres Freund <andres@anarazel.de> wrote:\n> > I think this needs to be quickly reworked or reverted.\n\nI think it's fairly clear by now that revert is appropriate for now.\n\n\n> I don't like this patch, either. In addition to what Andres mentioned,\n> CheckBuffer() is a completely-special case mechanism which can't be\n> reused by anything else. In particular, the amcheck for heap stuff\n> which I recently committed (866e24d47db1743dfcff5bd595b57e3a143f2cb1)\n> would really like a way to examine a buffer without risking an error\n> if PageIsVerified() should happen to fail, but this patch is of\n> absolutely no use in getting there, because CheckBuffer() doesn't give\n> the caller any way to access the contents of the buffer. It can only\n> do the checks that it knows how to do, and that's it. That doesn't\n> seem like a good design.\n\nWouldn't this be better served by having a ReadBufferExtended() flag,\npreventing erroring out and zeroing the buffer? I'm not sure that\nhandling both the case where the buffer contents need to be valid and\nthe one where it doesn't will make for a good API.\n\n\n> I don't like the fact that CheckBuffer() silently skips dirty buffers,\n> either. The comment should really say that it checks the state of a\n> buffer without loading it into shared buffers, except sometimes it\n> doesn't actually check it.\n\nYea, I don't see a good reason for that either. There's reasons for\ndirty buffers that aren't WAL logged - so if the on-disk page is broken,\na standby taken outside pg_basebackup would possibly still end up with a\ncorrupt on-disk page. Similar with a crash restart.\n\n\n> If the buffer is in shared buffers, we could take a share-lock\n> on the buffer and copy the contents of the page as it exists on disk,\n> and then still check it.\n\nDon't think we need a share lock. 
That still allows the buffer to be\nwritten out (and thus a torn read). What we need is to set\nBM_IO_IN_PROGRESS on the buffer in question - only one backend can set\nthat. And then unset that again, without unsetting\nBM_DIRTY/BM_JUST_DIRTIED.\n\n\n> It feels really confusing to me that the user-exposed function here is\n> called pg_relation_check_pages(). How is the user supposed to\n> understand the difference between what this function does and what the\n> new verify_heapam() in amcheck does? The answer is that the latter\n> does far more extensive checks, but this isn't obvious from the SGML\n> documentation, which says only that the blocks are \"verified,\" as if\n> an end-user can reasonably be expected to know what that means. It\n> seems likely to lead users to the belief that if this function passes,\n> they are in good shape, which is extremely far from being true. Just\n> look at what PageIsVerified() checks compared to what verify_heapam()\n> checks.\n\nYea I had similar thoughts, it should just be called\npg_checksum_verify_relation() or something.\n\n\n> In fact, I would argue that this functionality ought to live in\n> amcheck rather than core, though there could usefully be enabling\n> functions in core.\n\nI'm not really convinced by this though. It's not really AM\nspecific - works for all types of relations with storage; don't really\nobject either...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 2 Nov 2020 11:34:57 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Mon, Nov 2, 2020 at 8:35 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-11-02 12:35:30 -0500, Robert Haas wrote:\n> > It feels really confusing to me that the user-exposed function here is\n> > called pg_relation_check_pages(). How is the user supposed to\n> > understand the difference between what this function does and what the\n> > new verify_heapam() in amcheck does? The answer is that the latter\n> > does far more extensive checks, but this isn't obvious from the SGML\n> > documentation, which says only that the blocks are \"verified,\" as if\n> > an end-user can reasonably be expected to know what that means. It\n> > seems likely to lead users to the belief that if this function passes,\n> > they are in good shape, which is extremely far from being true. Just\n> > look at what PageIsVerified() checks compared to what verify_heapam()\n> > checks.\n>\n> Yea I had similar thoughts, it should just be called\n> pg_checksum_verify_relation() or something.\n>\n\n+1.\n\n\n> > In fact, I would argue that this functionality ought to live in\n> > amcheck rather than core, though there could usefully be enabling\n> > functions in core.\n>\n> I'm not really convinced by this though. It's not really AM\n> specific - works for all types of relations with storage; don't really\n> object either...\n\nYeah, I'm not sure about that one either. Also what would happen\nif/when we get checksums on things that aren't even relations? (though\nmaybe that goes for other parts of amcheck at some point as well?)\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Tue, 3 Nov 2020 09:31:20 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Tue, Nov 03, 2020 at 09:31:20AM +0100, Magnus Hagander wrote:\n> On Mon, Nov 2, 2020 at 8:35 PM Andres Freund <andres@anarazel.de> wrote:\n>> On 2020-11-02 12:35:30 -0500, Robert Haas wrote:\n>>> It feels really confusing to me that the user-exposed function here is\n>>> called pg_relation_check_pages(). How is the user supposed to\n>>> understand the difference between what this function does and what the\n>>> new verify_heapam() in amcheck does? The answer is that the latter\n>>> does far more extensive checks, but this isn't obvious from the SGML\n>>> documentation, which says only that the blocks are \"verified,\" as if\n>>> an end-user can reasonably be expected to know what that means. It\n>>> seems likely to lead users to the belief that if this function passes,\n>>> they are in good shape, which is extremely far from being true.\n\nThe checks of verify_heapam() go much deeper, but they target only one\nAM, while this stuff should remain more general. There seems to be\nsome overlap in terms of the basic checks done by bufmgr.c, and\nthe fact that you may not want to be that intrusive with the\nexisting buffer pool as well when running the AM checks. It also\nseems to me that the use cases are quite different for both: the\noriginal goal of this thread is to detect physical corruption for all\nAMs, while verify_heapam() looks after logical corruption in the way\nheap is handled.\n\n>> Yea I had similar thoughts, it should just be called\n>> pg_checksum_verify_relation() or something.\n> \n> +1.\n\nI mentioned that upthread; is there really a dependency on checksums\nhere? There are two cases where we can still apply some checks on a\npage, without any need for checksums:\n- The state of the page header.\n- A zeroed page, if pd_upper is 0. 
Those pages are valid, and don't have\na checksum computed.\nSo it seems to me that when it comes to relation pages, the\ncheck of a page should answer the question: is this page loadable\nin shared buffers, or not?\n\n>>> In fact, I would argue that this functionality ought to live in\n>>> amcheck rather than core, though there could usefully be enabling\n>>> functions in core.\n>>\n>> I'm not really convinced by this though. It's not really AM\n>> specific - works for all types of relations with storage; don't really\n>> object either...\n> \n> Yeah, I'm not sure about that one either. Also what would happen\n> if/when we get checksums on things that aren't even relations? (though\n> maybe that goes for other parts of amcheck at some point as well?)\n\nI also thought about amcheck when looking at this thread, but it did\nnot seem the right place, as this applies to any AM that is able to\nload stuff into shared buffers.\n--\nMichael",
"msg_date": "Tue, 3 Nov 2020 18:36:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Mon, Nov 02, 2020 at 11:34:57AM -0800, Andres Freund wrote:\n> On 2020-11-02 12:35:30 -0500, Robert Haas wrote:\n>> On Thu, Oct 29, 2020 at 2:17 PM Andres Freund <andres@anarazel.de> wrote:\n>>> I think this needs to be quickly reworked or reverted.\n> \n> I think it's fairly clear by now that revert is appropriate for now.\n\nYep, that's clear. I'll deal with that tomorrow. That's more than a\nsimple rework.\n\n>> I don't like this patch, either. In addition to what Andres mentioned,\n>> CheckBuffer() is a completely-special case mechanism which can't be\n>> reused by anything else. In particular, the amcheck for heap stuff\n>> which I recently committed (866e24d47db1743dfcff5bd595b57e3a143f2cb1)\n>> would really like a way to examine a buffer without risking an error\n>> if PageIsVerified() should happen to fail, but this patch is of\n>> absolutely no use in getting there, because CheckBuffer() doesn't give\n>> the caller any way to access the contents of the buffer. It can only\n>> do the checks that it knows how to do, and that's it. That doesn't\n>> seem like a good design.\n> \n> Wouldn't this be better served by having a ReadBufferExtended() flag,\n> preventing erroring out and zeroing the buffer? I'm not sure that\n> handling both the case where the buffer contents need to be valid and\n> the one where it doesn't will make for a good API.\n\nIf you grep for ReadBuffer_common() in some of the emails I sent... I\nwas rather interested in something like that.\n\n>> I don't like the fact that CheckBuffer() silently skips dirty buffers,\n>> either. The comment should really say that it checks the state of a\n>> buffer without loading it into shared buffers, except sometimes it\n>> doesn't actually check it.\n> \n> Yea, I don't see a good reason for that either. 
There's reasons for\n> dirty buffers that aren't WAL logged - so if the on-disk page is broken,\n> a standby taken outside pg_basebackup would possibly still end up with a\n> corrupt on-disk page. Similar with a crash restart.\n\nEr, if you don't skip dirty buffers, wouldn't you actually report some\npages as broken when attempting to run those checks on a standby that may have\nsome torn pages from a previous base backup? You could still run into\nproblems post-promotion, until the first checkpoint post-recovery\nhappens, no?\n\n>> If the buffer is in shared buffers, we could take a share-lock\n>> on the buffer and copy the contents of the page as it exists on disk,\n>> and then still check it.\n> \n> Don't think we need a share lock. That still allows the buffer to be\n> written out (and thus a torn read). What we need is to set\n> BM_IO_IN_PROGRESS on the buffer in question - only one backend can set\n> that. And then unset that again, without unsetting\n> BM_DIRTY/BM_JUST_DIRTIED.\n\nIf that can work, we could make use of some of that for base backups\nfor a single retry of a page that initially failed.\n--\nMichael",
"msg_date": "Tue, 3 Nov 2020 18:46:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Mon, Nov 2, 2020 at 2:35 PM Andres Freund <andres@anarazel.de> wrote:\n> Wouldn't this be better served by having a ReadBufferExtended() flag,\n> preventing erroring out and zeroing the buffer? I'm not sure that\n> handling both the case where the buffer contents need to be valid and\n> the one where it doesn't will make for a good API.\n\nI'm not sure. The goal I had in mind was giving a caller a way to get\na copy of a buffer even if it's one we wouldn't normally admit into\nshared_buffers. I think it's probably a bad idea to allow for a back\ndoor where things that fail PageIsVerified() can nevertheless escape\ninto the buffer, but that doesn't mean a checker or recovery tool\nshouldn't be allowed to see them.\n\n> > If the buffer is in shared buffers, we could take a share-lock\n> > on the buffer and copy the contents of the page as it exists on disk,\n> > and then still check it.\n>\n> Don't think we need a share lock. That still allows the buffer to be\n> written out (and thus a torn read). What we need is to set\n> BM_IO_IN_PROGRESS on the buffer in question - only one backend can set\n> that. And then unset that again, without unsetting\n> BM_DIRTY/BM_JUST_DIRTIED.\n\nOK.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 3 Nov 2020 11:52:01 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
},
{
"msg_contents": "On Tue, Nov 03, 2020 at 06:46:12PM +0900, Michael Paquier wrote:\n> Yep, that's clear. I'll deal with that tomorrow. That's more than a\n> simple rework.\n\nThis part is now done as of e152506a.\n--\nMichael",
"msg_date": "Wed, 4 Nov 2020 10:32:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Online checksums verification in the backend"
}
]
[
{
"msg_contents": "In the thread \nhttps://www.postgresql.org/message-id/2620882.s52SJui4ql@x200m\nI've suggested to split the one big StdRdOptions that is used for options storage \ninto Options structures individual for each relkind and each relam.\n\nThat patch has been split into smaller parts, most of them already \ncommitted:\nhttps://commitfest.postgresql.org/25/2294/ - Remove StdRdOptions from AM\nhttps://commitfest.postgresql.org/25/2297/ - Do not use StdRdOptions for \npartitioned tables\nhttps://commitfest.postgresql.org/25/2295/ - Some useful Asserts for view-\nrelated macros\n\nAnd here goes the last part of the StdRdOptions removal patch, where StdRdOptions \nis replaced with HeapOptions and ToastOptions.\n\nWhat did I do here:\n\n- Added the HeapOptions and ToastOptions structures\n- Moved the options building tab for autovacuum into an AUTOVACUUM_RELOPTIONS macro, \nso it can be used in the relopt_parse_elt tab both for heap and toast\n- Changed every place in the code where the old heap_reloptions was used to use the \nnew heap_reloptions or toast_reloptions\n- Changed the heap & toast option fetching macros to use HeapOptions and \nToastOptions\n- Added Asserts to the heap and toast options macros. Now we can finally do it.\n\nWhat I did not do:\n\n- I've split the fillfactor-related macros into heap and toast, so \nRelationGetFillFactor becomes HeapGetFillFactor and ToastGetFillFactor. I \nhad to do that, because they now handle different structures. \nBut there are heap-only option macros like RelationGetParallelWorkers that \nwould be better called HeapGetParallelWorkers, as they are heap related. But I \ndid not change the names, as somebody from the core team (I think it was Alvaro; it \nwas a while ago) asked me not to change macro names unless it is unavoidable. \nSo I kept the names, though I still think that naming them with a Heap prefix \nwould make the code clearer.\n\n- The vacuum_index_cleanup and vacuum_truncate options were added recently. 
They \nwere added into StdRdOptions. I think their place is inside AutoVacOpts, not in \nStdRdOptions, but I did not dare to change that. If you see it the same way as I \ndo, please let me know and I will move them to a proper place.\n\n-- \nSoftware Developer: https://www.upwork.com/freelancers/~014a87e140ff02c0da\nBody-oriented Therapist: https://vk.com/nataraj_rebalancing (Russian)",
"msg_date": "Fri, 06 Dec 2019 18:18:48 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": true,
"msg_subject": "[PATCH] Finally split StdRdOptions into HeapOptions and ToastOptions"
},
{
"msg_contents": "On Fri, Dec 06, 2019 at 06:18:48PM +0300, Nikolay Shaplov wrote:\n> In the thread \n> https://www.postgresql.org/message-id/2620882.s52SJui4ql@x200m\n> I've suggested to split the one big StdRdOptions that is used for options storage \n> into Options structures individual for each relkind and each relam\n> \n> And here goes the last part of the StdRdOptions removal patch, where StdRdOptions \n> is replaced with HeapOptions and ToastOptions.\n\n-typedef struct StdRdOptions\n+/*\n+ * HeapOptions\n+ * Binary representation of relation options for Heap relations.\n+ */\n+typedef struct HeapOptions\n\nI think that it makes sense to split the relation options dedicated to\nheap into their own parsing structure, because those options are\nactually related to the heap table AM. However, I think that this\npatch is not ambitious enough in the work which is done and that\nthings could move in a more generic direction. At the end of the\nday, I'd like to think that we should have something like:\n- Heap-related reloptions are built as part of its AM handler in\nheapam_handler.c, with reloptions.c holding no more references to\nheap. At all.\n- The table AM option parsing follows a model close to what is done\nfor indexes in terms of option parsing, moving the responsibility to\ndefine relation options to each table AM.\n- Toast is an interesting case, as table AMs may want to use toast\ntables. Or not. Robert may be able to comment more on that as he has\nworked in this area for bd12499.\n--\nMichael",
"msg_date": "Mon, 9 Dec 2019 12:11:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Finally split StdRdOptions into HeapOptions and\n ToastOptions"
},
{
"msg_contents": "In a message of Monday, 9 December 2019 12:11:17 MSK, Michael \nPaquier wrote:\n> On Fri, Dec 06, 2019 at 06:18:48PM +0300, Nikolay Shaplov wrote:\n> > In the thread\n> > https://www.postgresql.org/message-id/2620882.s52SJui4ql@x200m\n> > I've suggested to split the one big StdRdOptions that is used for options\n> > storage into Options structures individual for each relkind and each\n> > relam\n> > \n> > And here goes the last part of the StdRdOptions removal patch, where\n> > StdRdOptions is replaced with HeapOptions and ToastOptions.\n> \n> -typedef struct StdRdOptions\n> +/*\n> + * HeapOptions\n> + * Binary representation of relation options for Heap relations.\n> + */\n> +typedef struct HeapOptions\n> \n> I think that it makes sense to split relation options dedicated to\n> heap into their own parsing structure, because those options are\n> actually related to the table AM heap. However, I think that this\n> patch is not ambitious enough in the work which is done and that\n> things could move into a more generic direction. At the end of the\n> day, I'd like to think that we should have something like:\n> - Heap-related reloptions are built as part of its AM handler in\n> heapam_handler.c, with reloptions.c holding no more references to\n> heap. At all.\n> - The table AM option parsing follows a model close to what is done\n> for indexes in terms of option parsing, moving the responsibility to\n> define relation options to each table AM.\n> - Toast is an interesting case, as table AMs may want to use toast\n> tables. Or not. Robert may be able to comment more on that as he has\n> worked in this area for bd12499.\n\nOh, yeah, I forgot that relations now also have AMs :-)\n\nBut the truth is that my goal is to move all the code that defines option \nnames, min/max values etc. inside the AM code. 
To move the data from \nthe boolRelOpts, intRelOpts, realRelOpts and enumRelOpts arrays in \nreloptions.c into the code that implements the AMs that use these options.\n\nI did it for indexes in a patch I offered several years ago. Now we also have \nrelation AMs. \n\nBut I would prefer to fix index AM reloptions first, and then copy that \nsolution for relations. \n\nBecause if I first copy the AM solution from indexes to relations, then I will have \nto fix it in two places.\n\nSo I would prefer to keep reloptions for relations in reloptions.c, only split \nthem into HeapOptions and ToastOptions, then change the AM for indexes, moving \noption definitions into the AMs, and then clone the solution for relations.\n\nThis seems to be the most simple and most logical way.\n\nPS. I've checked the patch against current master. No changes were needed, but \nI am attaching a diff made against current master, just in case.",
"msg_date": "Wed, 04 Mar 2020 22:58:31 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Finally split StdRdOptions into HeapOptions and\n ToastOptions"
},
{
"msg_contents": "On Wed, Mar 04, 2020 at 10:58:31PM +0300, Nikolay Shaplov wrote:\n> But the truth is that my goal is to move all the code that defines option \n> names, min/max values etc. inside the AM code. To move the data from \n> the boolRelOpts, intRelOpts, realRelOpts and enumRelOpts arrays in \n> reloptions.c into the code that implements the AMs that use these options.\n> \n> I did it for indexes in a patch I offered several years ago. Now we also have \n> relation AMs. \n> \n> But I would prefer to fix index AM reloptions first, and then copy that \n> solution for relations.\n\nHow do you think that this part should be changed then, if this needs\nany changes? It seems to me that we have a rather correct layer for\nindex AMs by requiring each one to define the available option set\nusing indoptions through their handler, with option fetching macros\nlocated within each AM.\n\n> Because if I first copy the AM solution from indexes to relations, then I will have \n> to fix it in two places.\n> \n> So I would prefer to keep reloptions for relations in reloptions.c, only split \n> them into HeapOptions and ToastOptions, then change the AM for indexes, moving \n> option definitions into the AMs, and then clone the solution for relations.\n\nThen, for table AMs, it seems to me that you are right from a long-term\nperspective to have the toast-related options in reloptions.c, or\nperhaps located directly within a more toast-related file (?) as table\nAMs interact with toast using heapam_relation_needs_toast_table and\nsuch callbacks. So for heap, moving the option handling to roughly\nheapam_handler.c is a natural move, though this requires a redesign of\nthe existing structure to use option handling closer to what\nindoptions does, but for tables.\n--\nMichael",
"msg_date": "Sat, 7 Mar 2020 10:03:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Finally split StdRdOptions into HeapOptions and\n ToastOptions"
},
{
"msg_contents": "In a message of Saturday, 7 March 2020 10:03:40 MSK, Michael Paquier \nwrote:\n> On Wed, Mar 04, 2020 at 10:58:31PM +0300, Nikolay Shaplov wrote:\n> > But the truth is that my goal is to move all the code that defines option\n> > names, min/max values etc. inside the AM code. To move the data from\n> > the boolRelOpts, intRelOpts, realRelOpts and enumRelOpts arrays in\n> > reloptions.c into the code that implements the AMs that use these options.\n> > \n> > I did it for indexes in a patch I offered several years ago. Now we\n> > also have relation AMs.\n> > \n> > But I would prefer to fix index AM reloptions first, and then copy that\n> > solution for relations.\n> \n> How do you think that this part should be changed then, if this needs\n> any changes? It seems to me that we have a rather correct layer for\n> index AMs by requiring each one to define the available option set\n> using indoptions through their handler, with option fetching macros\n> located within each AM.\n\nMy idea is like this:\n\nNow the information about what reloptions an AM has and how they should be parsed is \nstored in two places. One is reloptions.c and its boolRelOpts, intRelOpts, \nrealRelOpts and enumRelOpts arrays; the other is the \nstatic const relopt_parse_elt tab[] inside the amoptions_function amoptions \nfunction of each AM.\n\nMy suggestion is to merge all this data into one structure. 
Like \noption_definition\n\n\n/* generic struct to hold shared data */ \ntypedef struct option_definition_basic \n{ \n const char *name; /* must be first (used as list termination \n * marker) */ \n const char *desc; \n LOCKMODE lockmode; \n option_definition_flags flags; \n option_type type; \n int struct_offset; /* offset of the value in the Bytea representation \n*/\n} option_definition_basic;\n\ntypedef struct option_definition_bool \n{ \n option_definition_basic base; \n bool default_val; \n} option_definition_bool; \n \ntypedef struct option_definition_int \n{ \n option_definition_basic base; \n int default_val; \n int min; \n int max; \n} option_definition_int; \n \ntypedef struct option_definition_real \n{ \n option_definition_basic base; \n double default_val; \n double min; \n double max; \n} option_definition_real; \n \ntypedef struct option_definition_enum \n{ \n option_definition_basic base; \n const char **allowed_values; /* Null-terminated array of allowed values for \n * the option */ \n int default_val; /* Number of the item in the allowed_values array */ \n} option_definition_enum; \n\nThis example is from my old code; I guess I should add a union here, which would \nmake the code more readable... But the idea is the same: we have one structure \nthat describes how the option should be parsed.\n\nThen we gather all option definitions for one object (for example, for an index) \ninto a structure called OptionDefSet \n\ntypedef struct OptionDefSet\n{ \n option_definition_basic **definitions; \n int num; /* Number of catalog items in use */ \n int num_allocated; /* Number of catalog items allocated */ \n bool forbid_realloc; /* If the number of items of the catalog was \n * strictly set to a certain value, do not allow \n * adding more items */ \n Size struct_size; /* Size of a structure for options in binary \n * representation */ \n postprocess_bytea_options_function postprocess_fun; /* This function is \n * called after options \n * were converted into the \n * Bytea representation. 
\n * Can be used for extra\n * validation and so on */\n char *namespace; /* Catalog is used for options from this \n * namespace */ \n} OptionDefSet;\n\n\nThis DefSet will have all we need to parse options for a certain object. \n\nAll of this will be stored in the AM code.\n\nThen we replace the amoptions_function amoptions; function of the AM with an \namoptions_def_set_function amoptions_def_set; function. This function will \nreturn the OptionDefSet to whoever calls it.\n\nSo whenever we need an option in index_reloptions from reloptions.c, we get this \nDefSet, and then call a function that parses options using this data.\n\n\nWhy do we need this?\n\n1. To have all option-related data in one place. Now, when a developer wants \nto add an option, he needs to find all the places in the code where this option should \nbe added. That is not a good thing. It brings troubles and errors.\n\n2. Now we have two different option storages, one for in-core AM options and one for \ncontrib AM options. They are the boolRelOpts, intRelOpts, realRelOpts, stringRelOpts, \nenumRelOpts arrays for in-core AMs and static relopt_gen **custom_options \nfor contrib AMs. It is better to have the same tool for both. My idea would allow \nhaving the same code both in contrib and in-core AMs.\n\n3. Then we will be able to have reloptions-like options just about anywhere: just \ndefine an OptionDefSet, and add code that provides option values from SQL and \npg_catalog. I came to the idea of rewriting the option code when I was \nworking on the task of adding opclass parameters; Nikita Glukhov works on it \nnow. It is very similar to options, but right now you cannot use the reloptions code \nfor that. When we have the OptionDefSet I've described, it can be used for \nopclass parameters too.\n\nAn example of how it can be done can be found at https://\ncommitfest.postgresql.org/15/992/\nThe code is quite outdated, and has a lot of extra code that has already been committed \nor is not really needed. 
But you can see the idea.\n\nI will try to provide a new version of it soon, with no extra code, that can be \napplied to current master, but it would be good to apply it to the master \nbranch where there is no StdRdOptions. This will make the patch more readable and \nunderstandable.\n\nSo, can I please have it in the code :-)\n\n> > Because if I first copy AM solution from indexes to relation, then I will\n> > have to fix it in two places.\n> > \n> > So I would prefer to keep reloptions for relations in relations.c, only\n> > split them into HeapOptions and ToastOptions, then change AM for indexes\n> > moving option definition into AM's and then clone the solution for\n> > relations.\n> Then, for table AMs, it seems to me that you are right for long-term\n> perspective to have the toast-related options in reloptions.c, or\n> perhaps directly located within more toast-related file (?) as table\n> AMs interact with toast using heapam_relation_needs_toast_table and\n> such callbacks. So for heap, moving the option handling to roughly\n> heapam_handler.c is a natural move, though this requires a redesign of\n> the existing structure to use option handling closer to what\n> indoptions does, but for tables.\nWe also have view reloptions and attribute options, as well as toast options.\nWe can keep them in reloptions.c or find a better place for them. It is not a big \nproblem, I think. And as for heap options, yes, as heap now has an AM, they should \nbe moved inside the AM. I can do it when we are finished with index options.\n\n\n\n\n",
"msg_date": "Sat, 07 Mar 2020 15:14:42 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Finally split StdRdOptions into HeapOptions and\n ToastOptions"
},
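To make the OptionDefSet idea above concrete, here is a minimal, self-contained sketch of a union-based definition table plus a generic parser, along the lines of the "I guess I should add a union here" remark. All names, fields, and the DemoOptions struct are illustrative stand-ins, not the actual patch API:

```c
/* Hypothetical sketch of a per-AM option definition table: one entry both
 * describes an option and says where its parsed value lands in the options
 * struct. Names are illustrative only. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

typedef enum { OPT_BOOL, OPT_INT } option_type;

typedef struct option_definition
{
    const char *name;            /* NULL name terminates the set */
    option_type type;
    size_t      struct_offset;   /* where the value lives in the options struct */
    union
    {
        bool bool_default;
        struct { int default_val, min, max; } int_vals;
    } u;
} option_definition;

/* The options struct an AM would keep in rd_options (bytea-like blob). */
typedef struct DemoOptions
{
    int  fillfactor;
    bool autosummarize;
} DemoOptions;

static const option_definition demo_defs[] = {
    {"fillfactor", OPT_INT, offsetof(DemoOptions, fillfactor),
     {.int_vals = {90, 10, 100}}},
    {"autosummarize", OPT_BOOL, offsetof(DemoOptions, autosummarize),
     {.bool_default = false}},
    {NULL}
};

/* Generic code: fill defaults from the table, then apply name=value pairs. */
static void
demo_fill_defaults(const option_definition *defs, void *opts)
{
    for (const option_definition *d = defs; d->name; d++)
    {
        char *slot = (char *) opts + d->struct_offset;

        if (d->type == OPT_INT)
            *(int *) slot = d->u.int_vals.default_val;
        else
            *(bool *) slot = d->u.bool_default;
    }
}

static bool
demo_set_option(const option_definition *defs, void *opts,
                const char *name, const char *value)
{
    for (const option_definition *d = defs; d->name; d++)
    {
        char *slot;

        if (strcmp(d->name, name) != 0)
            continue;
        slot = (char *) opts + d->struct_offset;
        if (d->type == OPT_INT)
        {
            int v = atoi(value);

            if (v < d->u.int_vals.min || v > d->u.int_vals.max)
                return false;   /* out of range */
            *(int *) slot = v;
        }
        else
            *(bool *) slot = (strcmp(value, "true") == 0);
        return true;
    }
    return false;               /* unknown option */
}
```

The point is that the table is the only per-AM data the generic parser needs, which is what would let in-core and contrib AMs share one mechanism.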
{
"msg_contents": "A new version of the patch.\nAutovacuum options were extended in b07642db\n\nSo I added that options to the current patch.",
"msg_date": "Sat, 28 Mar 2020 21:57:33 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Finally split StdRdOptions into HeapOptions and\n ToastOptions"
},
{
"msg_contents": "> On 28 Mar 2020, at 19:57, Nikolay Shaplov <dhyan@nataraj.su> wrote:\n> \n> A new version of the patch.\n> Autovacuum options were extended in b07642db\n> \n> So I added that options to the current patch.\n\nThe heapam.c hunk in this version fails to apply to HEAD, can you please submit\na rebased version? Marking the CF entry as Waiting on Author in the meantime.\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 2 Jul 2020 16:15:13 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Finally split StdRdOptions into HeapOptions and\n ToastOptions"
},
{
"msg_contents": "В письме от четверг, 2 июля 2020 г. 17:15:13 MSK пользователь Daniel \nGustafsson написал:\n\n> > A new version of the patch.\n> > Autovacuum options were extended in b07642db\n> > \n> > So I added that options to the current patch.\n> \n> The heapam.c hunk in this version fails to apply to HEAD, can you please\n> submit a rebased version? \nThanks for reminding about it.\n\nHere goes a rebased version.\n\n\n-- \nSoftware Developer: https://www.upwork.com/freelancers/~014a87e140ff02c0da\nBody-oriented Therapist: https://vk.com/nataraj_rebalancing (Russian)",
"msg_date": "Fri, 03 Jul 2020 11:31:08 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Finally split StdRdOptions into HeapOptions and\n ToastOptions"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nHi,\r\n\r\nthank you for the patch. It applies cleanly, compiles and passes check, check-world.\r\n\r\nI feel as per the discussion, this is a step to the right direction yet it does not get far enough. From experience, I can confirm that dealing with reloptions in a new table AM is somewhat of a pain. Ultimately, reloptions should be handled by the table AM specific code. The current patch does not address the issue. Yet it does make the issue easier to address by clearing up the current state.\r\n\r\nIf you allow me, I have a couple of comments.\r\n\r\n-\tsaveFreeSpace = RelationGetTargetPageFreeSpace(relation,\r\n-\t\t\t\t\t\t\t\t\t\t\t\t HEAP_DEFAULT_FILLFACTOR);\r\n+\tif (IsToastRelation(relation))\r\n+\t\tsaveFreeSpace = ToastGetTargetPageFreeSpace();\r\n+\telse\r\n+\t\tsaveFreeSpace = HeapGetTargetPageFreeSpace(relation);\r\n\r\nFor balance, it does make some sense for ToastGetTargetPageFreeSpace() to get relation as an argument, similarly to HeapGetTargetPageFreeSpace().\r\nAlso, this pattern is repeated in four places, maybe the branch can be moved inside a macro or static inline instead?\r\n\r\n- /* Retrieve the parallel_workers reloption, or -1 if not set. */\r\n- rel->rel_parallel_workers = RelationGetParallelWorkers(relation, -1);\r\n+ /*\r\n+ * Retrieve the parallel_workers for heap and mat.view relations.\r\n+ * Use -1 if not set, or if we are dealing with other relation kinds\r\n+ */\r\n+ if (relation->rd_rel->relkind == RELKIND_RELATION ||\r\n+ relation->rd_rel->relkind == RELKIND_MATVIEW)\r\n+ rel->rel_parallel_workers = RelationGetParallelWorkers(relation, -1);\r\n+ else\r\n+ rel->rel_parallel_workers = -1;\r\n\r\nIf the comment above is agreed upon, then it makes a bit of sense to apply the same here. 
The expression in the branch is already asserted for in macro, why not switch there and remove the responsibility from the caller?\r\n\r\nAny thoughts on the above?\r\n\r\nCheers,\r\nGeorgios\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Mon, 20 Jul 2020 15:36:44 +0000",
"msg_from": "Georgios Kokolatos <gkokolatos@protonmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Finally split StdRdOptions into HeapOptions and\n ToastOptions"
},
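A hypothetical shape for folding the repeated IsToastRelation() branch from the review into a single helper. The formula mirrors how PostgreSQL derives target free space from a fillfactor, but the constants here (a mock block size, and the assumption that toast pages are treated as packed full) are placeholders for illustration, not the real code:

```c
/* Sketch of a single free-space helper replacing the repeated
 * if (IsToastRelation(...)) branch. MOCK_BLCKSZ and the toast default
 * fillfactor are assumptions, not PostgreSQL's actual values. */
#include <assert.h>
#include <stdbool.h>

#define MOCK_BLCKSZ                    8192
#define TOAST_DEFAULT_FILLFACTOR_MOCK  100   /* assumed: toast pages packed full */

static int
target_page_free_space(bool is_toast, int heap_fillfactor)
{
    int fillfactor = is_toast ? TOAST_DEFAULT_FILLFACTOR_MOCK : heap_fillfactor;

    /* free space reserved per page = blocksize * (100 - fillfactor) / 100 */
    return MOCK_BLCKSZ * (100 - fillfactor) / 100;
}
```

Whether such a helper is worth having is exactly the naming question raised later in the thread (it would really be a "toast-or-heap" helper, not a generic relation one).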
{
"msg_contents": "В письме от понедельник, 20 июля 2020 г. 18:36:44 MSK пользователь Georgios \nKokolatos написал:\n\nHi! Sorry for really long delay, I was at my summer vacations, and then has \nurgent things to finish first. :-( Now I hope we can continue... \n\n\n> thank you for the patch. It applies cleanly, compiles and passes check,\n> check-world.\n\nThank you for reviewing efforts. \n \n> I feel as per the discussion, this is a step to the right direction yet it\n> does not get far enough. From experience, I can confirm that dealing with\n> reloptions in a new table AM is somewhat of a pain. Ultimately, reloptions\n> should be handled by the table AM specific code. The current patch does not\n> address the issue. Yet it does make the issue easier to address by clearing\n> up the current state.\n\nMoving reloptions to AM code is the goal I am slowly moving to. I've started \nsome time ago with big patch https://commitfest.postgresql.org/14/992/ and \nhave been told to split it into smaller parts. So I did, and this patch is \nlast step that cleans options related things up, and then actual moving can be \ndone.\n \n> If you allow me, I have a couple of comments.\n> \n> -\tsaveFreeSpace = RelationGetTargetPageFreeSpace(relation,\n> -\t\t\t\t\t\t\t\t\t\t\t\t HEAP_DEFAULT_FILLFACTOR);\n> +\tif (IsToastRelation(relation))\n> +\t\tsaveFreeSpace = ToastGetTargetPageFreeSpace();\n> +\telse\n> +\t\tsaveFreeSpace = HeapGetTargetPageFreeSpace(relation);\n> \n> For balance, it does make some sense for ToastGetTargetPageFreeSpace() to\n> get relation as an argument, similarly to HeapGetTargetPageFreeSpace().\n\nToastGetTargetPageFreeSpace return a const value. I've change the code, so it \ngets relation argument, that is not used, the way you suggested. But I am not \nsure if it is good or bad idea. May be we will get some \"Unused variable\" \nwarning on some compilers. I like consistency... But not sure we need it here. 
\n\n> - /* Retrieve the parallel_workers reloption, or -1 if not set. */\n> - rel->rel_parallel_workers = RelationGetParallelWorkers(relation,\n> -1);\n + /*\n> + * Retrieve the parallel_workers for heap and mat.view relations.\n> + * Use -1 if not set, or if we are dealing with other relation\n> kinds\n + */\n> + if (relation->rd_rel->relkind == RELKIND_RELATION ||\n> + relation->rd_rel->relkind == RELKIND_MATVIEW)\n> + rel->rel_parallel_workers =\n> RelationGetParallelWorkers(relation, -1);\n + else\n> + rel->rel_parallel_workers = -1;\n> Also, this pattern is repeated in four places, maybe the branch can be\n> moved inside a macro or static inline instead? \n\n> If the comment above is agreed upon, then it makes a bit of sense to apply\n> the same here. The expression in the branch is already asserted for in\n> macro, why not switch there and remove the responsibility from the caller?\n\nI guess you are right here, because the logic is the following: for a heap \nrelation take the option from the options, for _all_ others use -1. This can be moved \nto a macro.\n\nSo I changed it to \n\n/* \n * HeapGetParallelWorkers \n * Returns the heap's parallel_workers reloption setting. \n * Note multiple eval of argument! \n */ \n#define HeapGetParallelWorkers(relation, defaultpw) \\ \n (AssertMacro(relation->rd_rel->relkind == RELKIND_RELATION || \\ \n relation->rd_rel->relkind == RELKIND_MATVIEW), \\ \n (relation)->rd_options ? \\ \n ((HeapOptions *) (relation)->rd_options)->parallel_workers : \\ \n (defaultpw)) \n \n/* \n * RelationGetParallelWorkers \n * Returns the relation's parallel_workers reloption setting. \n * Note multiple eval of argument! \n */ \n \n#define RelationGetParallelWorkers(relation, defaultpw) \\ \n (((relation)->rd_rel->relkind == RELKIND_RELATION || \\ \n (relation)->rd_rel->relkind == RELKIND_MATVIEW) ? 
\\ \n HeapGetParallelWorkers(relation, defaultpw) : defaultpw)\n\n\nBut I would not like to move\n\n\tif (IsToastRelation(relation))\n\t\tsaveFreeSpace = ToastGetTargetPageFreeSpace(relation);\n\telse\n\t\tsaveFreeSpace = HeapGetTargetPageFreeSpace(relation);\n\ninto a macro, as there is a choice only between heap and toast. All other \nrelation types are not mentioned.\n\nSo we cannot call it RelationGetTargetPageFreeSpace. It would actually be \nToastOrHeapGetTargetPageFreeSpace. Better not to have such a macro.\n\nPlease find the new version of the patch in the attachment.\n\n-- \nSoftware Developer: https://www.upwork.com/freelancers/~014a87e140ff02c0da\nBody-oriented Therapist: https://vk.com/nataraj_rebalancing (Russian)",
"msg_date": "Sun, 13 Sep 2020 19:04:14 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Finally split StdRdOptions into HeapOptions and\n ToastOptions"
},
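For comparison, here is a rough sketch of the "static inline instead" alternative Georgios mentioned, which avoids the "Note multiple eval of argument!" hazard the macros above warn about. The types are mocked stand-ins for Relation and HeapOptions, not the actual patch definitions:

```c
/* Static-inline sketch of RelationGetParallelWorkers: the argument is
 * evaluated exactly once, and non-heap relkinds fall back to the default.
 * MockRelation/MockHeapOptions stand in for the real structs. */
#include <assert.h>
#include <stddef.h>

#define RELKIND_RELATION 'r'
#define RELKIND_MATVIEW  'm'
#define RELKIND_INDEX    'i'

typedef struct { int parallel_workers; } MockHeapOptions;

typedef struct
{
    char             relkind;
    MockHeapOptions *rd_options;   /* NULL when no reloptions are set */
} MockRelation;

static inline int
relation_get_parallel_workers(const MockRelation *rel, int defaultpw)
{
    if (rel->relkind != RELKIND_RELATION && rel->relkind != RELKIND_MATVIEW)
        return defaultpw;          /* other relkinds never carry the option */
    return rel->rd_options ? rel->rd_options->parallel_workers : defaultpw;
}
```

Unlike the macro pair, a caller cannot accidentally trigger double evaluation of a side-effecting argument, and the relkind check lives in exactly one place.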
{
"msg_contents": "Hi Georgios,\n\nOn 9/13/20 12:04 PM, Nikolay Shaplov wrote:\n> В письме от понедельник, 20 июля 2020 г. 18:36:44 MSK пользователь Georgios\n> Kokolatos написал:\n> \n>> thank you for the patch. It applies cleanly, compiles and passes check,\n>> check-world.\n> \n> Thank you for reviewing efforts.\n\n<snip>\n\n> Please find new version of the patch in the attachment.\n\nAny thoughts on the updated patch?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 5 Mar 2021 09:00:45 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Finally split StdRdOptions into HeapOptions and\n ToastOptions"
},
{
"msg_contents": "On Sun, 2020-09-13 at 19:04 +0300, Nikolay Shaplov wrote:\n> Moving reloptions to AM code is the goal I am slowly moving to. I've\n> started \n> some time ago with big patch \n> https://commitfest.postgresql.org/14/992/ and \n> have been told to split it into smaller parts. So I did, and this\n> patch is \n> last step that cleans options related things up, and then actual\n> moving can be \n> done.\n\nThank you for working on this.\n\nCan you outline the plan for moving these options to the table AM to\nmake sure this patch is a step in the right direction?\n\nI was trying to work through this problem as well[1], and there are a\nfew complications.\n\n* Which options apply to any relation (of any table AM), and which\napply to only heaps? As far as I can tell, the only one that seems\nheap-specific is \"fillfactor\".\n\n* Toast tables can be any AM, as well, so if we accept new reloptions\nfor a custom AM, we also need to accept options for toast tables of\nthat AM.\n\n* Implementation-wise, the bytea representation of the options is not\nvery easy to extend. Should we have a new text field in the catalog to\nhold the custom options?\n\nRegards,\n\tJeff Davis\n\n[1] \nhttps://www.postgresql.org/message-id/43c6ec161f930e385dbc3169a065a917cfc60714.camel%40j-davis.com\n\n\n\n",
"msg_date": "Thu, 03 Jun 2021 17:19:09 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Finally split StdRdOptions into HeapOptions and\n ToastOptions"
},
{
"msg_contents": "В письме от пятница, 4 июня 2021 г. 3:19:09 MSK пользователь Jeff Davis \nнаписал:\n\n> > Moving reloptions to AM code is the goal I am slowly moving to. I've\n> > started\n> > some time ago with big patch\n> > https://commitfest.postgresql.org/14/992/ and\n> > have been told to split it into smaller parts. So I did, and this\n> > patch is\n> > last step that cleans options related things up, and then actual\n> > moving can be\n> > done.\n> \n> Thank you for working on this.\nWelcome!\nSorry for slow reply, I am on summer vacations now, no chance for fast replies \nnow :-)\n\n> Can you outline the plan for moving these options to the table AM to\n> make sure this patch is a step in the right direction?\n\nYes, I can. First you can see the whole patch, the way it was several years \nago: https://commitfest.postgresql.org/15/992/, reloptions8a.diff\nThe things I would say is accurate for postgres ~11, it may have been changed \nsince I last payed attention to them.\n\nSo, there are three general places where options can be stored:\n1. Global boolRelOpts, intRelOpts, realRelOpts, stringRelOpts in src/backend/\naccess/common/reloptions.c for in-core access methods. \n\n2. custom_options array of accessable via add_bool_reloption, \nadd_int_reloption, add_real_reloption, add_string_reloption for access methods \nfrom contribs. (See reloptions.c too)\n\n3. And also each access method has an array of relopt_parse_elt[] that is \nalso about reloptions. 
\n\n1 and 2 are technically arrays of relopt_gen, and store information about what \nkinds of options we have.\n\n3 is an array of relopt_parse_elt[] that stores information about how options should be \npacked into the binary representation.\n\nMy idea was to merge relopt_gen and relopt_parse_elt into one structure (in my \npatch it is called \"option_definition_basic\", though maybe it should be renamed); each \naccess method that needs options should have a set of option_definition_basic \nentries for its options,\n\nand this set of option definitions is the only data that is needed to parse \noptions into the binary representation.\n\nSo in an access method, instead of the amoptions function we will have an \namrelopt_catalog function that returns the \"option definition set\" for this AM, \nand this definition set is used by the option parser to parse options.\n\nSo, if my explanation is not quite clear, please ask questions; I will try to \nanswer them.\n\n> I was trying to work through this problem as well[1], and there are a\n> few complications.\n> \n> * Which options apply to any relation (of any table AM), and which\n> apply to only heaps? As far as I can tell, the only one that seems\n> heap-specific is \"fillfactor\".\n\nFrom my point of view, each relation kind has its own set of options.\nThe fact that almost every kind has a fillfactor is just a coincidence.\nIf we try to do some optimization here, we will be buried under the complexity \nof it soon. So they are _different_ options that just happen to have the same name.\n\n> * Toast tables can be any AM, as well, so if we accept new reloptions\n> for a custom AM, we also need to accept options for toast tables of\n> that AM.\n\nWhen I wrote this patch, AMs existed only for index relations. \nI do not know how it is implemented for heap now, but there should be some logic in \nit. 
If toast tables have their own AM, then the option definition set should be \nstored there, and we should find a way to work with it, somehow.\n\n> * Implementation-wise, the bytea representation of the options is not\n> very easy to extend. Should we have a new text field in the catalog to\n> hold the custom options?\n\nI do not really understand this question.\n\nNormally all options can be well represented as a binary structure stored in a \nbytea. I see no problem here. If we need some unusual behaviour, we can use a \nstring option with a custom validation function. This should cover almost all \nneeds I can imagine.\n\n=======\n\nSo if you are interested in having a better option implementation, and have no \nideas of your own, I would suggest reviving my patch and trying to commit it.\nWhat I need first of all is a reviewer. Testing and coauthoring will also be \nappreciated.\n\nMy original big patch, which I gave you the link to, has been split into several parts.\nThe last minor part, which should be committed in advance and has not been \ncommitted yet, is https://commitfest.postgresql.org/33/2370/\nIf you join as a reviewer that would be splendid! :-)\n\n-- \nNikolay Shaplov \ndhyan@nataraj.su (e-mail, jabber) \n@dhyan:nataraj.su (matrix)\n\n\n\n\n",
"msg_date": "Sun, 20 Jun 2021 17:03:15 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Finally split StdRdOptions into HeapOptions and\n ToastOptions"
},
{
"msg_contents": "On Sun, Sep 13, 2020 at 9:34 PM Nikolay Shaplov <dhyan@nataraj.su> wrote:\n>\n> В письме от понедельник, 20 июля 2020 г. 18:36:44 MSK пользователь Georgios\n> Kokolatos написал:\n>\n> Hi! Sorry for really long delay, I was at my summer vacations, and then has\n> urgent things to finish first. :-( Now I hope we can continue...\n>\n>\n> > thank you for the patch. It applies cleanly, compiles and passes check,\n> > check-world.\n>\n> Thank you for reviewing efforts.\n>\n> > I feel as per the discussion, this is a step to the right direction yet it\n> > does not get far enough. From experience, I can confirm that dealing with\n> > reloptions in a new table AM is somewhat of a pain. Ultimately, reloptions\n> > should be handled by the table AM specific code. The current patch does not\n> > address the issue. Yet it does make the issue easier to address by clearing\n> > up the current state.\n>\n> Moving reloptions to AM code is the goal I am slowly moving to. I've started\n> some time ago with big patch https://commitfest.postgresql.org/14/992/ and\n> have been told to split it into smaller parts. So I did, and this patch is\n> last step that cleans options related things up, and then actual moving can be\n> done.\n>\n> > If you allow me, I have a couple of comments.\n> >\n> > - saveFreeSpace = RelationGetTargetPageFreeSpace(relation,\n> > - HEAP_DEFAULT_FILLFACTOR);\n> > + if (IsToastRelation(relation))\n> > + saveFreeSpace = ToastGetTargetPageFreeSpace();\n> > + else\n> > + saveFreeSpace = HeapGetTargetPageFreeSpace(relation);\n> >\n> > For balance, it does make some sense for ToastGetTargetPageFreeSpace() to\n> > get relation as an argument, similarly to HeapGetTargetPageFreeSpace().\n>\n> ToastGetTargetPageFreeSpace return a const value. I've change the code, so it\n> gets relation argument, that is not used, the way you suggested. But I am not\n> sure if it is good or bad idea. 
May be we will get some \"Unused variable\"\n> warning on some compilers. I like consistency... But not sure we need it here.\n>\n> > - /* Retrieve the parallel_workers reloption, or -1 if not set. */\n> > - rel->rel_parallel_workers = RelationGetParallelWorkers(relation,\n> > -1);\n> + /*\n> > + * Retrieve the parallel_workers for heap and mat.view relations.\n> > + * Use -1 if not set, or if we are dealing with other relation\n> > kinds\n> + */\n> > + if (relation->rd_rel->relkind == RELKIND_RELATION ||\n> > + relation->rd_rel->relkind == RELKIND_MATVIEW)\n> > + rel->rel_parallel_workers =\n> > RelationGetParallelWorkers(relation, -1);\n> + else\n> > + rel->rel_parallel_workers = -1;\n> > Also, this pattern is repeated in four places, maybe the branch can be\n> > moved inside a macro or static inline instead?\n>\n> > If the comment above is agreed upon, then it makes a bit of sense to apply\n> > the same here. The expression in the branch is already asserted for in\n> > macro, why not switch there and remove the responsibility from the caller?\n>\n> I guess here you are right, because here the logic is following: for heap\n> relation take option from options, for _all_ others use -1. This can be moved\n> to macro.\n>\n> So I changed it to\n>\n> /*\n> * HeapGetParallelWorkers\n> * Returns the heap's parallel_workers reloption setting.\n> * Note multiple eval of argument!\n> */\n> #define HeapGetParallelWorkers(relation, defaultpw) \\\n> (AssertMacro(relation->rd_rel->relkind == RELKIND_RELATION || \\\n> relation->rd_rel->relkind == RELKIND_MATVIEW), \\\n> (relation)->rd_options ? 
\\\n> ((HeapOptions *) (relation)->rd_options)->parallel_workers : \\\n> (defaultpw))\n>\n> /*\n> * RelationGetParallelWorkers\n> * Returns the relation's parallel_workers reloption setting.\n> * Note multiple eval of argument!\n> */\n>\n> #define RelationGetParallelWorkers(relation, defaultpw) \\\n> (((relation)->rd_rel->relkind == RELKIND_RELATION || \\\n> (relation)->rd_rel->relkind == RELKIND_MATVIEW) ? \\\n> HeapGetParallelWorkers(relation, defaultpw) : defaultpw)\n>\n>\n> But I would not like to move\n>\n> if (IsToastRelation(relation))\n> saveFreeSpace = ToastGetTargetPageFreeSpace(relation);\n> else\n> saveFreeSpace = HeapGetTargetPageFreeSpace(relation);\n>\n> into macros, as there is a choice only between heap and toast. All other\n> relation types are not mentioned.\n>\n> So we can not call it RelationGetTargetPageFreeSpace. It would be\n> ToastOrHeapGetTargetPageFreeSpace actually. Better not to have such macro.\n>\n> Please find new version of the patch in the attachment.\n\nThe patch does not apply on Head anymore, could you rebase and post a\npatch. I'm changing the status to \"Waiting for Author\".\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 14 Jul 2021 17:39:12 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Finally split StdRdOptions into HeapOptions and\n ToastOptions"
},
{
"msg_contents": "В письме от среда, 14 июля 2021 г. 15:09:12 MSK пользователь vignesh C \nнаписал:\n> The patch does not apply on Head anymore, could you rebase and post a\n> patch. I'm changing the status to \"Waiting for Author\".\n\nThank you for notification. \n\nI've tried to rebase it and found out that some options have been added to \npartitioned table.\nHandling this needs careful approach, and I will fix it two weeks later, when I \nam back from vacations.\n\n\nMeanwhile I would strongly suggest to change\n\n{\"vacuum_index_cleanup\", RELOPT_TYPE_BOOL,\n\nto \n\n{\"vacuum_index_cleanup\", RELOPT_TYPE_ENUM,\n\nin src/backend/access/common/reloptions.c\n\nThis change should be done in 3499df0d\nBut current implementation of reloptions is very error prone , and it is very \neasy to miss this part.\n\n\n\n-- \nNikolay Shaplov \ndhyan@nataraj.su (e-mail, jabber) \n@dhyan:nataraj.su (matrix)\n\n\n\n\n",
"msg_date": "Thu, 22 Jul 2021 11:30:51 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Finally split StdRdOptions into HeapOptions and\n ToastOptions"
}
] |
[
{
"msg_contents": "Hi,\n\n[1] made me notice these issues. The issues here are mostly independent,\nbut it's still worthwhile to read that thread - in particular because my\nproposed solution for the problem is possibly somewhat related to this\nissue. And, if we were to go for my more extreme proposal below, the fix\nfor this issue would probably also fix [1].\n\n\nThere's several smgr operations (e.g. smgrtruncate(), smgrdounlinkall())\nthat first do a a DropRelFileNodesAllBuffers() and then perform the\nfilesystem operation. Without a critical section.\n\nAs far as I can tell that's not acceptable.\n\nUsing smgrtruncate() as an example (because it's executed from backends\n/ autovacuum rather than e.g. checkpointer, and because it's targeted at\nrelations that live past the current transaction):\n\nDropRelFileNodeBuffers() throws away valid buffer contents. Therefore,\nwe'll be in a corrupt state should smgr_truncate fail. The on-disk state\nwill be the old file contents, but the in-memory state will not reflect\nthat. Consider e.g. the case that autovacuum pruned away all tuples from\na relation. That will result in pg_class.relfrozenxid being updated,\nreflecting the new horizon. If we then do a DropRelFileNodeBuffers()\nthrowing away the modified buffer contents, but subsequently fail to\ntruncate the underlying file, we'll potentially have a lot of pages full\nof tuples referencing xids that are older than relfrozenxid. Which scans\nwill try to return, as the underlying file is the original size.\n\nAs far as I can tell it, as things stand, is simply never ok to do\nDropRelFileNodeBuffers() - when there potentially are dirty pages -\noutside of a critical section.\n\n\nRelationTruncate() notes:\n\t/*\n\t * We WAL-log the truncation before actually truncating, which means\n\t * trouble if the truncation fails. If we then crash, the WAL replay\n\t * likely isn't going to succeed in the truncation either, and cause a\n\t * PANIC. 
It's tempting to put a critical section here, but that cure\n\t * would be worse than the disease. It would turn a usually harmless\n\t * failure to truncate, that might spell trouble at WAL replay, into a\n\t * certain PANIC.\n\t */\n\nbut I think this analysis is quite insufficient. As far as I can tell it\ndoesn't consider the issue outlined above, nor does it consider what\nwill happen if standbys/PITR will replay the WAL records, but the\nprimary considers the relation to be of a different length.\n\n\nIt seems to me that we either need to\n\na) write out all dirty buffers during DropRelFileNodeBuffers(), thereby\n preventing the issue of old page contents \"coming back to live\" after\n a failed truncation.\nb) accept using a critical section, with the obvious consequence that\n failing to truncate would lead to a PANIC\nc) use a more complex protocol to invalidate buffers, ensuring there's\n no inconsistency between fs and shared_buffers.\n\n\nc) could e.g. be something like\n\n1) mark all buffers as about-to-be-dropped\n2) CacheInvalidateSmgr()\n3) truncate on filesystem level\n4a) if that fails, remove the about-to-be-dropped flags, in a PG_CATCH block\n4b) if that succeeds, fully remove the buffers to be dropped\n\nWhen another backend wants to use a buffer marked as about-to-be-dropped\nit would need to wait for that operation to finish (this afaict could\nneither work the way StartBufferIO() does, nor the way\nLockBufferForCleanup() does).\n\nSuch a scheme would have the advantage that truncation etc would behave\na lot more like normal buffer modifications. There'd e.g. afterwards be\nan interlock between truncations and BufferSync() / checkpoints, which\nimo is quite attractive.\n\nOn the other hand, the cost of DropRelFileNodeBuffers() needing to\nperform an exhaustive search of shared_buffers, would be quite\npainful. The single DropRelFileNodeBuffers() call is already bad\nenough. 
I guess we could amortize that slightly, by caching the buffer\nlocations for a limited number of buffers - in many cases where the\nsearch costs are problematic there won't be too many pages for individual\nrelations present.\n\n\nFor me alternative a) is prohibitively expensive, and not worth\ndiscussing much. I think I can live with b), but I know that I'm much\nless concerned with PANICs in these types of situations than others. c)\nseems worth investigating, but presumably would end up being too\ncomplicated to backpatch.\n\n\nI think several DropRelFileNodeBuffers() callers besides vacuum are a\nbit less concerning. E.g. when dropping a relation we do so as part of a\ncheckpoint, which'll trigger a PANIC IIRC. And the in-place truncation\ncase for TRUNCATE IIRC only applies to relations created in the same\nsubtransaction, which makes the failure scenario above largely moot.\n\n\nTom, I seem to recall a recent thread of yours discussing a different\napproach to truncation. I wonder if there's some intersection with\nthat. But unfortunately my search somehow has come up with nothing so\nfar - do you remember enough to find the thread?\n\nComments?\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/20191206230640.2dvdjpcgn46q3ks2%40alap3.anarazel.de\n\n\n",
"msg_date": "Fri, 6 Dec 2019 16:12:32 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "smgr vs DropRelFileNodeBuffers() vs filesystem state vs no critical\n section"
},
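Alternative c) in the message above can be illustrated with a toy, single-threaded model (the flag names, the simulated truncate, and the lack of locking are purely illustrative): buffers are only invalidated after the filesystem truncate succeeds, and a failed truncate rolls back the marks, so the in-memory state never diverges from the file:

```c
/* Toy model of the mark / truncate / drop-or-rollback protocol. A real
 * implementation would need header locks and a way for concurrent users
 * to wait on marked buffers; this only shows the state transitions. */
#include <assert.h>
#include <stdbool.h>

#define NBUFFERS 8
#define BM_VALID          0x01
#define BM_ABOUT_TO_DROP  0x02

static unsigned buf_flags[NBUFFERS];

static bool
truncate_buffers(int first_drop_block, bool fs_truncate_ok)
{
    /* 1) mark victims; concurrent users would have to wait on these */
    for (int i = first_drop_block; i < NBUFFERS; i++)
        if (buf_flags[i] & BM_VALID)
            buf_flags[i] |= BM_ABOUT_TO_DROP;

    /* 2)+3) invalidate, then truncate on the filesystem level (simulated) */
    if (!fs_truncate_ok)
    {
        /* 4a) failure: undo the marks, buffers stay valid (PG_CATCH path) */
        for (int i = first_drop_block; i < NBUFFERS; i++)
            buf_flags[i] &= ~BM_ABOUT_TO_DROP;
        return false;
    }

    /* 4b) success: now it is safe to fully remove the buffers */
    for (int i = first_drop_block; i < NBUFFERS; i++)
        if (buf_flags[i] & BM_ABOUT_TO_DROP)
            buf_flags[i] = 0;
    return true;
}
```

The key property is that no buffer content is discarded before the filesystem operation has succeeded, which is exactly what the current DropRelFileNodeBuffers()-first ordering violates.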
{
"msg_contents": "On Sat, Dec 7, 2019 at 5:42 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Tom, I seem to recall a recent thread of yours discussing a different\n> approach to truncation. I wonder if there's some intersection with\n> that. But unfortunately my search somehow has come up with nothing so\n> far - do you remember enough to find the thread?\n>\n\nIIUC, a similar problem is discussed in the thread [1] where Tom\nproposed a few solutions which are close to what you are proposing.\n\n[1] - https://www.postgresql.org/message-id/1880.1281020817%40sss.pgh.pa.us\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 7 Dec 2019 11:07:04 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: smgr vs DropRelFileNodeBuffers() vs filesystem state vs no\n critical section"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-07 11:07:04 +0530, Amit Kapila wrote:\n> On Sat, Dec 7, 2019 at 5:42 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Tom, I seem to recall a recent thread of yours discussing a different\n> > approach to truncation. I wonder if there's some intersection with\n> > that. But unfortunately my search somehow has come up with nothing so\n> > far - do you remember enough to find the thread?\n> >\n> \n> IIUC, a similar problem is discussed in the thread [1] where Tom\n> proposed a few solutions which are close to what you are proposing.\n> \n> [1] - https://www.postgresql.org/message-id/1880.1281020817%40sss.pgh.pa.us\n\nIt does indeed look like basically the same problem. I was actually\nremembering a different thread, that was more about truncating with not\nquite as heavyweight locking.\n\nHaving pondered this and some related problems (truncation, flushing\nmultiple buffers at once using asynchronous IO, PrefetchBuffer()\ndirectly into buffers, cleanup lock implementation), I think we\nbasically need the ability to set something like BM_IO_IN_PROGRESS on\nmultiple buffers (perhaps signalling different forms of IO with\ndifferent flags, but I'm not sure it's needed).\n\nI think we basically ought to replace the current IO buffer locking with\ncondition variables (as Robert has suggested at [1]). Instead of having\nan array of LWLocks (BufferIOLWLockArray), we'd allocate one condition\nvariable for each buffer. I think we have enough space directly in\nBufferDesc these days, due to the spinlock removal, but that seems like\na detail.\n\nFor truncation, we'd first iterate over all buffers once to mark them as\nBM_IO_IN_PROGRESS, then we would truncate the underlying file. If the\ntruncation succeeds, we can use a local palloc'd array of IO_IN_PROGRESS\nbuffers to actually evict them. 
If the truncation fails, the same array\nwould be used to reset IO_IN_PROGRESS (basically AbortBufferIO, except\nnot setting BM_IO_ERROR for the truncation case). This would solve the\nproblem of truncations leading to diverging fs/buffer state, would not\nrequire a PANIC, and would allow to have concurrent buffer eviction to\nefficiently wait for IO to finish.\n\nThis would pretty directly replace the current cleanup locks, which\nwould just need the existing flagbit to indicate that refcount=0 should\ncause a condition variable wakeup.\n\nDoes somebody see a fundamental hole in this approach?\n\nObviously there's lots of different details to fill in, but most of them\nseem likely to only be found by actually writing the patch...\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/CA%2BTgmoaj2aPti0yho7FeEf2qt-JgQPRWb0gci_o1Hfr%3DC56Xng%40mail.gmail.com\n\n\n",
"msg_date": "Sun, 22 Dec 2019 16:54:30 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: smgr vs DropRelFileNodeBuffers() vs filesystem state vs no\n critical section"
}
] |
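The two-pass protocol Andres proposes in the thread above (mark the relation's buffers as IO-in-progress, attempt the filesystem truncation, then either evict the buffers or undo the flag) can be modeled in miniature. The sketch below is single-threaded and hypothetical: `Buffer`, `BM_VALID`, `BM_IO_IN_PROGRESS` and the function names imitate PostgreSQL's buffer-manager vocabulary, but none of this is the real code, and the condition-variable wait that concurrent backends would need is omitted.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the mark-first, truncate-second idea.  Flag names mimic
 * PostgreSQL's but this is illustrative code, not the buffer manager. */

#define NBUFFERS 8
#define BM_VALID          0x01
#define BM_IO_IN_PROGRESS 0x02

typedef struct Buffer { int relnode; unsigned flags; } Buffer;

Buffer pool[NBUFFERS];

/* Pass 1: put the relation's buffers behind the IO flag; remember them. */
int
mark_rel_buffers(int relnode, int *marked)
{
    int n = 0;
    for (int i = 0; i < NBUFFERS; i++)
        if ((pool[i].flags & BM_VALID) && pool[i].relnode == relnode)
        {
            pool[i].flags |= BM_IO_IN_PROGRESS;
            marked[n++] = i;
        }
    return n;
}

/* Pass 2: depending on whether the ftruncate() succeeded, evict the
 * marked buffers or just clear the flag again, leaving buffer state and
 * filesystem state consistent either way. */
void
finish_truncate(const int *marked, int n, bool truncate_ok)
{
    for (int i = 0; i < n; i++)
    {
        if (truncate_ok)
            pool[marked[i]].flags = 0;                      /* evict */
        else
            pool[marked[i]].flags &= ~BM_IO_IN_PROGRESS;    /* undo */
    }
}
```

The point of interest is the failure path: because nothing was evicted before the truncation was attempted, clearing the flag restores a state consistent with the untruncated file, so no PANIC is required.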
[
{
"msg_contents": "Jeff said:\nhttps://www.postgresql.org/message-id/CAMkU%3D1zBJNVo2DGYBgLJqpu8fyjCE_ys%2Bmsr6pOEoiwA7y5jrA%40mail.gmail.com\n|What would I find very useful is a verbosity option to get the cost\n|estimates expressed as a multiplier of each *_cost parameter, rather than\n|just as a scalar.\n\nI guess the goal is something like\nEXPLAIN(COSTS, VERBOSE) -- or some new option?\n..would show something like\nSeq Scan on public.sites (cost=0.00..2.90 rows=160 width=107)\n Total costs: Seq page: 1.01 Random page: 1.23 CPU tuple: .05 CPU oper: .01\n Startup cost: [...]\n\nIt seems to me that's \"just\" a matter of redefining Cost and fixing everything that breaks:\n\nstruct Cost {\n double seq, rand;\n double cpu_tuple, cpu_index_tuple, cpu_oper;\n double parallel_setup; // This is probably always in startup_cost and never in run_cost\n\tdouble parallel_tuple; // This is probably always in run_cost and never in startup_cost\n double disable;\n};\n\nI'm perhaps 50% done with that - is there some agreement that's a desirable\ngoal and a good way to do it ?\n\nTo give an idea what I'm doing, there's a bunch of stuff like this:\n\n- if (path1->startup_cost < path2->startup_cost)\n+ if (cost_asscalar(&path1->startup_cost) < cost_asscalar(&path2->startup_cost))\n\n- qual_arg_cost += index_qual_cost.startup + index_qual_cost.per_tuple;\n+ cost_add2(&qual_arg_cost, &index_qual_cost.startup, &index_qual_cost.per_tuple);\n\n- if (cost.per_tuple > 10 * cpu_operator_cost)\n+ if (cost_isgt_scalar(&cost.per_tuple, 10 * cpu_operator_cost))\n\nAnd a great deal of stuff like this:\n\n- run_cost += cpu_run_cost;\n+ cost_add(&run_cost, &cpu_run_cost);\n \n /* tlist eval costs are paid per output row, not per tuple scanned */\n- startup_cost += path->pathtarget->cost.startup;\n- run_cost += path->pathtarget->cost.per_tuple * path->rows;\n+ cost_add(&startup_cost, &path->pathtarget->cost.startup);\n+ cost_add_mul(&run_cost, &path->pathtarget->cost.per_tuple, path->rows);\n \n 
path->startup_cost = startup_cost;\n- path->total_cost = startup_cost + run_cost;\n+ cost_set_sum2(&path->total_cost, &startup_cost, &run_cost);\n\n\nAs I've written it, that's somewhat different from Jeff's suggestion, as all\nthe entries in my struct are in units of cost. That seems easier due to (for\nexample) per-tablespace IO costs.\n\nI'd rather know sooner than later if there's a better way.\n\nJustin\n\n\n",
"msg_date": "Sat, 7 Dec 2019 03:10:04 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "verbose cost estimate"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> Jeff said:\n>> |What would I find very useful is a verbosity option to get the cost\n>> |estimates expressed as a multiplier of each *_cost parameter, rather than\n>> |just as a scalar.\n\n> It seems to me that's \"just\" a matter of redefining Cost and fixing everything that breaks:\n\n> struct Cost {\n> double seq, rand;\n> double cpu_tuple, cpu_index_tuple, cpu_oper;\n> double parallel_setup; // This is probably always in startup_cost and never in run_cost\n> \tdouble parallel_tuple; // This is probably always in run_cost and never in startup_cost\n> double disable;\n> };\n\n> I'm perhaps 50% done with that - is there some agreement that's a desirable\n> goal and a good way to do it ?\n\nNo, I think this will get rejected out of hand. The implications for\nthe planner's speed and memory consumption seem quite unacceptable\nfor the size of the benefit. What you're showing above probably\ndoubles the size of most Paths, and the added cycles in hot-spots\nlike add_path seem pretty daunting.\n\nWe had earlier discussions about just breaking out the disable_cost,\nand even that didn't look very promising as a practical matter :-(.\nNobody is willing to give up even small slowdowns in everyday\nplanning speed for corner-case needs like these.\n\nOne idea that would alleviate some of the problems is to keep the\nnet cost as-is, and just add a separate struct of broken-down\ncost. Then, for example, add_path doesn't change at all. But\nthis makes the memory consumption angle even worse.\n\nLike Jeff, I've occasionally wished for info like this. But not\noften, and not hard enough that I think the cost of getting it\nwould be justified.\n\nSomething that might be useful when you do want this info is to\nchange one of the cost parameters by some small delta, rerun the\nplan, and see how much the total cost changes; that gives you a\nlocal slope of the sensitivity function. 
Repeat as needed for\nother cost parameters. The tedious part is probably verifying\nthat the shape of the plan didn't change (else the cost comparison\nisn't telling you what you want). Perhaps building a tool\nto automate that idea would be worthwhile.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 07 Dec 2019 11:34:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: verbose cost estimate"
},
{
"msg_contents": "+1, adding that sort of structure to Cost would get rejected out of hand.\n\nhowever, having a 'disabled' bit be part of the cost structure is something\nthat I would support. This has been discussed previously, but even adding\none bit to Cost doesn't have everyone's support. The purpose of a disabled\nbit would be to distinguish plans that had no disable_cost added to them\nfrom plans that did so that the planner can choose the minimum cost\nnon-disabled plan, if any such plan exists, or choose the minimum cost plan\notherwise. A disable count could be used, but even a bool would probably\nsuffice.\n\nthank you,\n\n /Jim F\n\n\n\n-----\nJim Finnerty, AWS, Amazon Aurora PostgreSQL\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Mon, 9 Dec 2019 08:38:52 -0700 (MST)",
"msg_from": "Jim Finnerty <jfinnert@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: verbose cost estimate"
},
{
"msg_contents": "Jim Finnerty <jfinnert@amazon.com> writes:\n> +1, adding that sort of structure to Cost would get rejected out of hand.\n> however, having a 'disabled' bit be part of the cost structure is something\n> that I would support. This has been discussed previously, but even adding\n> one bit to Cost doesn't have everyone's support. The purpose of a disabled\n> bit would be to distinguish plans that had no disable_cost added to them\n> from plans that did so that the planner can choose the minimum cost\n> non-disabled plan, if any such plan exists, or choose the minimum cost plan\n> otherwise. A disable count could be used, but even a bool would probably\n> suffice.\n\nIf we did go that route, I think a disable count would be the right thing.\nIt wouldn't take up any more space than a bool, probably, once you account\nfor padding overhead. And the code impact in hotspots like add_path would\nbe just about the same too. The advantage of a count is that, for\nexample, if you have enable_seqscan off then a path containing three\nseqscans could be considered inferior to one with two; but if we have\nonly a bool then we can't tell the difference.\n\n(Having said that, I'm still about -0.5 or so on the idea. But if\nwe do it, we should do a count.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Dec 2019 11:20:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: verbose cost estimate"
},
{
"msg_contents": "On Mon, Dec 9, 2019 at 11:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If we did go that route, I think a disable count would be the right thing.\n> It wouldn't take up any more space than a bool, probably, once you account\n> for padding overhead. And the code impact in hotspots like add_path would\n> be just about the same too. The advantage of a count is that, for\n> example, if you have enable_seqscan off then a path containing three\n> seqscans could be considered inferior to one with two; but if we have\n> only a bool then we can't tell the difference.\n\nI'm not sure that I buy the idea that a disable count wouldn't take up\nany more space. A Boolean could even be represented as a flag inside\nof a bitmask, taking up just one bit. But even if you used a whole\nbyte for it, in the long term, that's going to be cheaper; people\naround here are not blind to the value of filling in holes left by\npadding.\n\nI do agree that an integer would give us more accurate planning. The\nquestion in my mind is whether we care. It's not crazy to say that\ndisabling is more for testing than real use, that it's best effort,\nand that once we give up on it, we give up completely -- which would\nmake a bool sufficient. Now the contrary position that we want to be\nmore accurate than that is not crazy either, and it's easy to see why\nthat would be more convenient with a complex plan.\n\nBut the real issue there, in my view, is that there's no way to\ndisable certain kinds of plans for just part of a query. Nor is there\nany way to politely inform the planner that its idea of how many rows\na certain scan or join will return is bollocks, and let it know the\nreal number. There's just no way at all - except in limited cases,\nsome unprincipled hacks - to give the planner that kind of guidance,\nsuggestion, recommendation, urging, advice, clue, inkling, indicator,\nor, you know, whatever other word we could use to describe that sort\nof thing. 
So we're left with crude tools that affect the whole query.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 9 Dec 2019 11:55:38 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: verbose cost estimate"
},
{
"msg_contents": "On Sat, Dec 07, 2019 at 11:34:12AM -0500, Tom Lane wrote:\n>Justin Pryzby <pryzby@telsasoft.com> writes:\n>> Jeff said:\n>>> |What would I find very useful is a verbosity option to get the cost\n>>> |estimates expressed as a multiplier of each *_cost parameter, rather than\n>>> |just as a scalar.\n>\n>> It seems to me that's \"just\" a matter of redefining Cost and fixing everything that breaks:\n>\n>> struct Cost {\n>> double seq, rand;\n>> double cpu_tuple, cpu_index_tuple, cpu_oper;\n>> double parallel_setup; // This is probably always in startup_cost and never in run_cost\n>> \tdouble parallel_tuple; // This is probably always in run_cost and never in startup_cost\n>> double disable;\n>> };\n>\n>> I'm perhaps 50% done with that - is there some agreement that's a desirable\n>> goal and a good way to do it ?\n>\n>No, I think this will get rejected out of hand. The implications for\n>the planner's speed and memory consumption seem quite unacceptable\n>for the size of the benefit. What you're showing above probably\n>doubles the size of most Paths, and the added cycles in hot-spots\n>like add_path seem pretty daunting.\n>\n\nYeah, that's an issue. But I have to admit my main issue with this\nproposal is that I have no idea how I'd interpret this Cost. I mean,\nwhat do the fields express for different types of paths? How do they\ncontribute to the actual cost of that path?\n\nWhat I regularly wish to know the parts of the cost for individual\npaths: how much is the I/O (and maybe some extra bits about caching,\nrandom and sequential I/O), cost of quals/functions, and so on. But this\ninfo is inherently path-specific, it makes little sense to include that\ninto the regular Path struct. Perhaps a path-specific struct, referenced\nfrom the path and built only with verbose explain would be fine?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 9 Dec 2019 23:14:06 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: verbose cost estimate"
},
{
"msg_contents": "On Mon, 9 Dec 2019 at 17:14, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Sat, Dec 07, 2019 at 11:34:12AM -0500, Tom Lane wrote:\n> >Justin Pryzby <pryzby@telsasoft.com> writes:\n> >> Jeff said:\n> >>> |What would I find very useful is a verbosity option to get the cost\n> >>> |estimates expressed as a multiplier of each *_cost parameter, rather than\n> >>> |just as a scalar.\n> >\n> >> It seems to me that's \"just\" a matter of redefining Cost and fixing everything that breaks:\n> >\n> >> struct Cost {\n> >> double seq, rand;\n> >> double cpu_tuple, cpu_index_tuple, cpu_oper;\n> >> double parallel_setup; // This is probably always in startup_cost and never in run_cost\n> >> double parallel_tuple; // This is probably always in run_cost and never in startup_cost\n> >> double disable;\n> >> };\n> >\n> >> I'm perhaps 50% done with that - is there some agreement that's a desirable\n> >> goal and a good way to do it ?\n> >\n> >No, I think this will get rejected out of hand. The implications for\n> >the planner's speed and memory consumption seem quite unacceptable\n> >for the size of the benefit. What you're showing above probably\n> >doubles the size of most Paths, and the added cycles in hot-spots\n> >like add_path seem pretty daunting.\n> >\n>\n> Yeah, that's an issue. But I have to admit my main issue with this\n> proposal is that I have no idea how I'd interpret this Cost. I mean,\n> what do the fields express for different types of paths? How do they\n> contribute to the actual cost of that path?\n\nWhat I think users would be able to do with this info is understand\nwhich parameter to tweak to raise the estimated cost of the node.\n\nEveryone knows if you see a index scan is being used but is taking\nlonger than a sequential scan then you might try raising\nrandom_page_cost. 
But I rarely see people tweaking the more \"exotic\"\nparameters like operator_tuple_cost or index_tuple_cost and when they\ndo they aren't really sure what nodes they're affecting...\n\nI remember planning to do a very similar thing back in the 8.3 era and\nnever getting around to it. You could imagine even storing these for\nthe overall plan in the logs and building a large matrix of actual\nexecution values versus these broken out individual costs. Then it\nbecomes a standard linear optimization problem to find the optimal\nvalues for each parameter to minimize inaccurate plan estimates (and\nto identify cases where there are outliers).\n\n-- \ngreg\n\n\n",
"msg_date": "Mon, 9 Dec 2019 17:27:01 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: verbose cost estimate"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> ... Perhaps a path-specific struct, referenced\n> from the path and built only with verbose explain would be fine?\n\nHow would that work, given that the planner doesn't know whether its\noutput is going to get explained? With features like the plan cache\nand auto_explain in mind, it's very hard to see how you avoid having\nto save the information always.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Dec 2019 17:40:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: verbose cost estimate"
},
{
"msg_contents": "On Mon, Dec 09, 2019 at 05:27:01PM -0500, Greg Stark wrote:\n>On Mon, 9 Dec 2019 at 17:14, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> On Sat, Dec 07, 2019 at 11:34:12AM -0500, Tom Lane wrote:\n>> >Justin Pryzby <pryzby@telsasoft.com> writes:\n>> >> Jeff said:\n>> >>> |What would I find very useful is a verbosity option to get the cost\n>> >>> |estimates expressed as a multiplier of each *_cost parameter, rather than\n>> >>> |just as a scalar.\n>> >\n>> >> It seems to me that's \"just\" a matter of redefining Cost and fixing everything that breaks:\n>> >\n>> >> struct Cost {\n>> >> double seq, rand;\n>> >> double cpu_tuple, cpu_index_tuple, cpu_oper;\n>> >> double parallel_setup; // This is probably always in startup_cost and never in run_cost\n>> >> double parallel_tuple; // This is probably always in run_cost and never in startup_cost\n>> >> double disable;\n>> >> };\n>> >\n>> >> I'm perhaps 50% done with that - is there some agreement that's a desirable\n>> >> goal and a good way to do it ?\n>> >\n>> >No, I think this will get rejected out of hand. The implications for\n>> >the planner's speed and memory consumption seem quite unacceptable\n>> >for the size of the benefit. What you're showing above probably\n>> >doubles the size of most Paths, and the added cycles in hot-spots\n>> >like add_path seem pretty daunting.\n>> >\n>>\n>> Yeah, that's an issue. But I have to admit my main issue with this\n>> proposal is that I have no idea how I'd interpret this Cost. I mean,\n>> what do the fields express for different types of paths? How do they\n>> contribute to the actual cost of that path?\n>\n>What I think users would be able to do with this info is understand\n>which parameter to tweak to raise the estimated cost of the node.\n>\n>Everyone knows if you see a index scan is being used but is taking\n>longer than a sequential scan then you might try raising\n>random_page_cost. 
But I rarely see people tweaking the more \"exotic\"\n>parameters like operator_tuple_cost or index_tuple_cost and when they\n>do they aren't really sure what nodes they're affecting...\n>\n\nWell, but that's kinda my point - how would you know that you need to\nincrease random_page_cost, or how big influence it has? The total is a\nfairly non-trivial combination of various cost parameters, effective\ncache size etc. Maybe I just don't understand how the cost is split into\nthose pieces, named the same as the cost GUCs ...\n\n>I remember planning to do a very similar thing back in the 8.3 era and\n>never getting around to it. You could imaging even storing these for\n>the overall plan in the logs and building a large matrix of actual\n>execution values versus these broken out individual costs. Then it\n>becomes a standard linear optimization problem to find the optimal\n>values for each parameter to minimize inaccurate plan estimates (and\n>to identify cases where there are outliers).\n>\n\nMaybe, but that's for one query. If you do this for many queries, the\nresults may be easily contradicting, no?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 10 Dec 2019 00:25:46 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: verbose cost estimate"
},
{
"msg_contents": "On Mon, Dec 09, 2019 at 05:40:40PM -0500, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> ... Perhaps a path-specific struct, referenced\n>> from the path and built only with verbose explain would be fine?\n>\n>How would that work, given that the planner doesn't know whether its\n>output is going to get explained? With features like the plan cache\n>and auto_explain in mind, it's very hard to see how you avoid having\n>to save the information always.\n>\n\nI don't know, but my assumption is that this information would be needed\nonly very rarely. So maybe we could pass a flag enabling this to the\nplanner when executed from explain, and disable storing the plan in the\nplan cache, or something. And the additional info would be only\navailable when explicitly requested using an extra EXPLAIN option.\n\nSo for example auto_explain would not really show this (or it might get\nan extra option, with additional overhead).\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 10 Dec 2019 00:31:39 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: verbose cost estimate"
},
{
"msg_contents": "On Tue, Dec 10, 2019 at 12:25:46AM +0100, Tomas Vondra wrote:\n> >Everyone knows if you see a index scan is being used but is taking\n> >longer than a sequential scan then you might try raising\n> >random_page_cost. But I rarely see people tweaking the more \"exotic\"\n> >parameters like operator_tuple_cost or index_tuple_cost and when they\n> >do they aren't really sure what nodes they're affecting...\n> >\n> \n> Well, but that's kinda my point - how would you know that you need to\n> increase random_page_cost, or how big influence it has? The total is a\n> fairly non-trivial combination of various cost parameters, effective\n> cache size etc. Maybe I just don't understand how the cost is split into\n> those pieces, named the same as the cost GUCs ...\n\nEverything which right now does:\n|cost += something*random_page_cost\n..ends up (via a macro):\ncost.random_page_cost += random_page_cost\n\nAnd everything which does:\n|cost1 += cost2\n..ends up doing the same for each of the component members.\n\n99% of this falls into place trivially.\n\nI'm attaching a patch which is perhaps 95% working; various plans have changed,\nso I gather there's at least a few bugs.\n\nThere's probably a few things which could be improved:\nProbably some \"Costs\" should be simple doubles if they're only ever multiples\nof a single cost parameter.\nMaybe someone will say that Cost should be a typedef to a struct* rather than a struct.\nMaybe I should get rid of cost_islt/isgt and just use cost_asscalar().\nSeems like parallel_setup_cost and disable_cost could be booleans.\n\nJustin",
"msg_date": "Mon, 9 Dec 2019 17:57:33 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: verbose cost estimate"
}
] |
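To make the mechanics being debated in this thread concrete, here is a standalone toy sketch of a component-wise cost, closer to Jeff's "multiplier of each *_cost parameter" framing than to Justin's "units of cost" variant. It is illustrative only: the field names and the `cost_add`/`cost_add_mul`/`cost_as_scalar` helpers are invented for this sketch rather than PostgreSQL's planner API, and the size and speed objections raised above apply to exactly this kind of widening of `Cost`.

```c
#include <assert.h>
#include <math.h>

/* Hypothetical broken-down cost: each field counts multiples of one
 * cost GUC; the scalar total is only computed at comparison time. */
typedef struct Cost
{
    double seq_pages;   /* multiples of seq_page_cost */
    double rand_pages;  /* multiples of random_page_cost */
    double cpu_tuples;  /* multiples of cpu_tuple_cost */
} Cost;

void
cost_add(Cost *dst, const Cost *src)
{
    dst->seq_pages  += src->seq_pages;
    dst->rand_pages += src->rand_pages;
    dst->cpu_tuples += src->cpu_tuples;
}

void
cost_add_mul(Cost *dst, const Cost *src, double ntimes)
{
    dst->seq_pages  += src->seq_pages * ntimes;
    dst->rand_pages += src->rand_pages * ntimes;
    dst->cpu_tuples += src->cpu_tuples * ntimes;
}

/* Collapse to the scalar an add_path-style comparison would use. */
double
cost_as_scalar(const Cost *c,
               double seq_page_cost, double random_page_cost,
               double cpu_tuple_cost)
{
    return c->seq_pages * seq_page_cost
         + c->rand_pages * random_page_cost
         + c->cpu_tuples * cpu_tuple_cost;
}
```

One property of storing multipliers is that the scalar total can be re-evaluated under a perturbed random_page_cost without re-planning, which is roughly the local-sensitivity information Tom suggests probing for by rerunning plans with a tweaked GUC.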
[
{
"msg_contents": "Hi, Hackers.\n\nI propose a small improvement to the psql command.\nCurrently, psql's help/quit/exit command needs to start from the first position of the prompt.\nFor example, if you write a command after a space, the continuation prompt\n(PROMPT2) is displayed.\n\n---\npostgres=> \\set PROMPT2 'continue=>'\npostgres=> <SPACE>help\ncontinue=>\n---\n\nThe attached patch allows the command to be executed ignoring leading white spaces.\n\nRegards,\nNoriyoshi Shinoda",
"msg_date": "Sat, 7 Dec 2019 13:23:46 +0000",
"msg_from": "\"Shinoda, Noriyoshi (PN Japan A&PS Delivery)\"\n\t<noriyoshi.shinoda@hpe.com>",
"msg_from_op": true,
"msg_subject": "psql small improvement patch"
},
{
"msg_contents": "\n\nOn 12/7/19 5:23 AM, Shinoda, Noriyoshi (PN Japan A&PS Delivery) wrote:\n> Hi, Hackers.\n> \n> I propose a small improvement to the psql command.\n> Currently, psql's help/quit/exit command needs to start from the first position of the prompt.\n> For example, if you write a command after a space, the continuation prompt\n> (PROMPT2) is displayed.\n> \n> ---\n> postgres=> \\set PROMPT2 'continue=>'\n> postgres=> <SPACE>help\n> continue=>\n> ---\n> \n> The attached patch allows the command to be executed ignoring leading white spaces.\n\nThank you for the patch. I took a look, and have two concerns.\n\nThe smaller concern is that psql uses isspace() to determine whether a\ncharacter is whitespace in the current locale, so instead of using\nstrspn(line, \" \\t\") you might want to use isspace() and be consistent.\nOn the other hand, there is precedent in other files for what you are\ndoing, such as in src/fe_utils/print.c:\n\n /* is string only whitespace? */\n if ((*ptr)[strspn(*ptr, \" \\t\")] == '\\0')\n\n\nMy larger concern is that people may be using space before a command\nword to avoid having it be interpreted as a command. Take for example\nthe following contrived psql interaction:\n\n\n mark=# create table help (str text);\n CREATE TABLE\n mark=# insert into help (str) values ('foo');\n INSERT 0 1\n mark=# select * from\n mark-# help\n mark-# where str is not null;\n str\n -----\n foo\n (1 row)\n\nIn the current unpatched psql, if I don't indent the second line, I\nget cruft in the output:\n\n mark=# select * from\n mark-# help\n Use \\? for help or press control-C to clear the input buffer.\n mark-# where str is not null;\n str\n -----\n foo\n (1 row)\n\nPatching psql as you propose would result in that cruft whether or\nnot I indent the second line. We would need to consider if that\nbehavior change is going to cause more problems for users than your\npatch is worth. 
How common is this problem you are trying to fix?\n\n\n--\nMark Dilger\n\n\n",
"msg_date": "Sat, 7 Dec 2019 07:19:30 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql small improvement patch"
},
{
"msg_contents": "\"Shinoda, Noriyoshi (PN Japan A&PS Delivery)\" <noriyoshi.shinoda@hpe.com> writes:\n> I propose a small improvement to the psql command.\n> Currently, psql's help/quit/exit command needs to start from the first position of the prompt.\n> The attached patch allows the command to be executed ignoring leading white spaces.\n\nPlease read the (very long) thread in which this behavior was designed:\n\nhttps://www.postgresql.org/message-id/flat/CACYgWUnvCSeiFXyw9%2BVqHVAd%2BfUxYhsrGxacRGJao63gznV9UQ%40mail.gmail.com\n\nAllowing whitespace before the special command was part of the design\nearly on, eg my proposal at\n\nhttps://www.postgresql.org/message-id/30157.1513058300%40sss.pgh.pa.us\n\nbut we eventually decided not to, see further down at\n\nhttps://www.postgresql.org/message-id/20180125204630.GA27619%40momjian.us\n\nIf you want to reverse that decision you need to present cogent arguments\nwhy, not just send in a patch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 07 Dec 2019 12:58:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql small improvement patch"
},
{
"msg_contents": "Thank you very much for your comments.\nI seem to have thought easily. I will reconsider.\n\nRegards.\n\n-----Original Message-----\nFrom: Mark Dilger [mailto:hornschnorter@gmail.com] \nSent: Sunday, December 8, 2019 12:20 AM\nTo: Shinoda, Noriyoshi (PN Japan A&PS Delivery) <noriyoshi.shinoda@hpe.com>; pgsql-hackers@lists.postgresql.org\nSubject: Re: psql small improvement patch\n\n\n\nOn 12/7/19 5:23 AM, Shinoda, Noriyoshi (PN Japan A&PS Delivery) wrote:\n> Hi, Hackers.\n> \n> I propose a small improvement to the psql command.\n> Currently, psql's help/quit/exit command needs to start from the first position of the prompt.\n> For example, if you write a command after a space, the continuation \n> prompt\n> (PROMPT2) is displayed.\n> \n> ---\n> postgres=> \\set PROMPT2 'continue=>'\n> postgres=> <SPACE>help\n> continue=>\n> ---\n> \n> The attached patch allows the command to be executed ignoring leading white spaces.\n\nThank you for the patch. I took a look, and have two concerns.\n\nThe smaller concern is that psql uses isspace() to determine whether a character is whitespace in the current locale, so instead of using strspn(line, \" \\t\") you might want to use isspace() and be consistent.\nOn the other hand, there is precedent in other files for what you are doing, such as in src/fe_utils/print.c:\n\n /* is string only whitespace? */\n if ((*ptr)[strspn(*ptr, \" \\t\")] == '\\0')\n\n\nMy larger concern is that people may be using space before a command word to avoid having it be interpreted as a command. Take for example the following contrived psql interaction:\n\n\n mark=# create table help (str text);\n CREATE TABLE\n mark=# insert into help (str) values ('foo');\n INSERT 0 1\n mark=# select * from\n mark-# help\n mark-# where str is not null;\n str\n -----\n foo\n (1 row)\n\nIn the current unpatched psql, if I don't indent the second line, I get cruft in the output:\n\n mark=# select * from\n mark-# help\n Use \\? 
for help or press control-C to clear the input buffer.\n mark-# where str is not null;\n str\n -----\n foo\n (1 row)\n\nPatching psql as you propose would result in that cruft whether or not I indent the second line. We would need to consider if that behavior change is going to cause more problems for users than your patch is worth. How common is this problem you are trying to fix?\n\n\n--\nMark Dilger\n\n\n",
"msg_date": "Sat, 7 Dec 2019 22:54:25 +0000",
"msg_from": "\"Shinoda, Noriyoshi (PN Japan A&PS Delivery)\"\n\t<noriyoshi.shinoda@hpe.com>",
"msg_from_op": true,
"msg_subject": "RE: psql small improvement patch"
},
{
"msg_contents": "On Sat, Dec 7, 2019 at 12:58:12PM -0500, Tom Lane wrote:\n> \"Shinoda, Noriyoshi (PN Japan A&PS Delivery)\" <noriyoshi.shinoda@hpe.com> writes:\n> > I propose a small improvement to the psql command.\n> > Currently, psql's help/quit/exit command needs to start from the first position of the prompt.\n> > The attached patch allows the command to be executed ignoring leading white spaces.\n> \n> Please read the (very long) thread in which this behavior was designed:\n> \n> https://www.postgresql.org/message-id/flat/CACYgWUnvCSeiFXyw9%2BVqHVAd%2BfUxYhsrGxacRGJao63gznV9UQ%40mail.gmail.com\n> \n> Allowing whitespace before the special command was part of the design\n> early on, eg my proposal at\n> \n> https://www.postgresql.org/message-id/30157.1513058300%40sss.pgh.pa.us\n> \n> but we eventually decided not to, see further down at\n> \n> https://www.postgresql.org/message-id/20180125204630.GA27619%40momjian.us\n> \n> If you want to reverse that decision you need to present cogent arguments\n> why, not just send in a patch.\n\nDo we need a C comment to document why no whitespace is allowed\nbefore it?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sat, 21 Dec 2019 15:08:06 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: psql small improvement patch"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> Do we need a C comment to document why no whitespace is allowed\n> before it?\n\nProbably, else we may not remember next time somebody wants to\nchange it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 21 Dec 2019 15:42:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql small improvement patch"
},
{
"msg_contents": "On Sat, Dec 21, 2019 at 03:42:21PM -0500, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > Do we need a C comment to document why no whitespace is allowed\n> > before it?\n> \n> Probably, else we may not remember next time somebody wants to\n> change it.\n\nDone, applied to master only.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sat, 21 Dec 2019 17:02:48 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: psql small improvement patch"
}
] |
[
{
"msg_contents": "It seems that explanation for the contents of the pid file has moved to\npidfile.h, but the comments in CreateLockFile() still point to miscadmin.h.\n\nThe attached patch updates those pointers.",
"msg_date": "Sat, 7 Dec 2019 23:40:27 -0800",
"msg_from": "Hadi Moshayedi <hadi@moshayedi.net>",
"msg_from_op": true,
"msg_subject": "Fix a comment in CreateLockFile"
},
{
"msg_contents": "On Sun, Dec 8, 2019 at 1:10 PM Hadi Moshayedi <hadi@moshayedi.net> wrote:\n>\n> It seems that explanation for the contents of the pid file has moved to pidfile.h, but the comments in CreateLockFile() still point to miscadmin.h.\n>\n> The attached patch updates those pointers.\n>\n\nYour patch looks correct to me on a quick look. I will take of this tomorrow.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 8 Dec 2019 16:32:22 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix a comment in CreateLockFile"
},
{
"msg_contents": "On Sun, Dec 8, 2019 at 4:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Dec 8, 2019 at 1:10 PM Hadi Moshayedi <hadi@moshayedi.net> wrote:\n> >\n> > It seems that explanation for the contents of the pid file has moved to pidfile.h, but the comments in CreateLockFile() still point to miscadmin.h.\n> >\n> > The attached patch updates those pointers.\n> >\n>\n> Your patch looks correct to me on a quick look. I will take of this tomorrow.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 Dec 2019 09:19:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix a comment in CreateLockFile"
}
] |
[
{
"msg_contents": "Is there a way to force a meaningful parallel seq scan, or at least the\nplanning of one, when the planner wants a non-parallel one?\n\nUsually I can do things like with with enable_* setting, but if I `set\nenable_seqscan to off`, it penalizes the parallel seq scan 8 times harder\nthan it penalizes the non-parallel one, so the plan does not switch.\n\nIf I set `force_parallel_mode TO on` then I do get a parallel plan, but it\nis a degenerate one which tells me nothing I want to know.\n\nIf I `set parallel_tuple_cost = 0` (or in some cases to a negative number),\nI can force it switch, but that destroys the purpose, which is to see what\nthe \"would have been\" plan estimates are for the parallel seq scan under\nthe default setting of the cost parameters.\n\nI can creep parallel_tuple_cost downward until it switches, and then try to\nextrapolate back up, but this tedious and not very reliable.\n\nCheers,\n\nJeff\n\nIs there a way to force a meaningful parallel seq scan, or at least the planning of one, when the planner wants a non-parallel one?Usually I can do things like with with enable_* setting, but if I `set enable_seqscan to off`, it penalizes the parallel seq scan 8 times harder than it penalizes the non-parallel one, so the plan does not switch.If I set `force_parallel_mode TO on` then I do get a parallel plan, but it is a degenerate one which tells me nothing I want to know.If I `set parallel_tuple_cost = 0` (or in some cases to a negative number), I can force it switch, but that destroys the purpose, which is to see what the \n\n\"would have been\"\n\nplan estimates are for the parallel seq scan under the default setting of the cost parameters.I can creep parallel_tuple_cost downward until it switches, and then try to extrapolate back up, but this tedious and not very reliable.Cheers,Jeff",
"msg_date": "Sun, 8 Dec 2019 13:23:57 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "disable only nonparallel seq scan."
},
{
"msg_contents": "On Sun, Dec 8, 2019 at 1:24 PM Jeff Janes <jeff.janes@gmail.com> wrote:\n> Is there a way to force a meaningful parallel seq scan, or at least the planning of one, when the planner wants a non-parallel one?\n>\n> Usually I can do things like with with enable_* setting, but if I `set enable_seqscan to off`, it penalizes the parallel seq scan 8 times harder than it penalizes the non-parallel one, so the plan does not switch.\n>\n> If I set `force_parallel_mode TO on` then I do get a parallel plan, but it is a degenerate one which tells me nothing I want to know.\n>\n> If I `set parallel_tuple_cost = 0` (or in some cases to a negative number), I can force it switch, but that destroys the purpose, which is to see what the \"would have been\" plan estimates are for the parallel seq scan under the default setting of the cost parameters.\n>\n> I can creep parallel_tuple_cost downward until it switches, and then try to extrapolate back up, but this tedious and not very reliable.\n\nI don't think there's a way to force this, but setting both\nparallel_setup_cost and parallel_tuple_cost to 0 seems to often be\nenough.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 10 Dec 2019 13:32:03 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: disable only nonparallel seq scan."
},
{
"msg_contents": "On Tue, Dec 10, 2019 at 1:32 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Sun, Dec 8, 2019 at 1:24 PM Jeff Janes <jeff.janes@gmail.com> wrote:\n> > Is there a way to force a meaningful parallel seq scan, or at least the\n> planning of one, when the planner wants a non-parallel one?\n> >\n> > Usually I can do things like with with enable_* setting, but if I `set\n> enable_seqscan to off`, it penalizes the parallel seq scan 8 times harder\n> than it penalizes the non-parallel one, so the plan does not switch.\n> >\n> > If I set `force_parallel_mode TO on` then I do get a parallel plan, but\n> it is a degenerate one which tells me nothing I want to know.\n> >\n> > If I `set parallel_tuple_cost = 0` (or in some cases to a negative\n> number), I can force it switch, but that destroys the purpose, which is to\n> see what the \"would have been\" plan estimates are for the parallel seq scan\n> under the default setting of the cost parameters.\n> >\n> > I can creep parallel_tuple_cost downward until it switches, and then try\n> to extrapolate back up, but this tedious and not very reliable.\n>\n> I don't think there's a way to force this, but setting both\n> parallel_setup_cost and parallel_tuple_cost to 0 seems to often be\n> enough.\n>\n\nYes, that is fine if I want the actual execution results. And I patch\nguc.c to allow negative settings, for when some extra persuasion is needed.\n\nBut here I want to see what the planner is thinking, and changing the *cost\nsettings changes that thinking. So I want to force the planner to choose\nthe \"next-best\" plan under the original cost settings so I can see how far\naway they are from each other. I made a crude patch to add\nenable_singleseqscan, which has been letting me get at this information now.\n\nI'm not proposing to apply this particular patch to the code base, but I do\nwonder if we can do something about this \"dark spot\" which no combination\nof current enable_* setting seems to be able to get at.\n\nCheers,\n\nJeff",
"msg_date": "Fri, 13 Dec 2019 09:32:58 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: disable only nonparallel seq scan."
}
] |
[
{
"msg_contents": "Buildfarm member hyrax has been intermittently failing the\ndeadlock-parallel isolation test ever since that went in.\nI finally got around to looking at this closely, and what\nseems to be happening is simply that isolationtester.c's\nhard-wired three-minute timeout for the completion of any\none test step is triggering. hyrax uses CLOBBER_CACHE_ALWAYS\nand it seems to be a little slower than other animals using\nCLOBBER_CACHE_ALWAYS, so it's unsurprising that it's showing\nthe symptom and nobody else is.\n\nThere are two things we could do about this:\n\n1. Knock the hard-wired setting up a tad, maybe to 5 minutes.\nEasy but doesn't seem terribly future-proof.\n\n2. Make the limit configurable somehow, probably from an\nenvironment variable. There's precedent for that (PGCTLTIMEOUT),\nand it would provide a way for owners of especially slow buildfarm\nmembers to adjust things ... but it would require owners of\nespecially slow buildfarm animals to adjust things.\n\nAny preferences? (Actually, it wouldn't be unreasonable to do\nboth things, I suppose.)\n\nBTW, I notice that isolationtester.c fails to print any sort of warning\nnotice when it decides it's waited too long. This seems like a\nspectacularly bad idea in hindsight: it's not that obvious why the test\ncase failed. Plus there's no way to tell exactly which connection it\ndecided to send a PQcancel to. So independently of the timeout-length\nissue, I think we ought to also make it print something like\n\"isolationtester: waited too long for something to happen, canceling\nstep thus-and-so\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 08 Dec 2019 17:08:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "hyrax versus isolationtester.c's hard-wired timeouts"
},
{
"msg_contents": "I wrote:\n> There are two things we could do about this:\n> 1. Knock the hard-wired setting up a tad, maybe to 5 minutes.\n> Easy but doesn't seem terribly future-proof.\n> 2. Make the limit configurable somehow, probably from an\n> environment variable. There's precedent for that (PGCTLTIMEOUT),\n> and it would provide a way for owners of especially slow buildfarm\n> members to adjust things ... but it would require owners of\n> especially slow buildfarm animals to adjust things.\n> Any preferences? (Actually, it wouldn't be unreasonable to do\n> both things, I suppose.)\n> BTW, I notice that isolationtester.c fails to print any sort of warning\n> notice when it decides it's waited too long. This seems like a\n> spectacularly bad idea in hindsight: it's not that obvious why the test\n> case failed. Plus there's no way to tell exactly which connection it\n> decided to send a PQcancel to. So independently of the timeout-length\n> issue, I think we ought to also make it print something like\n> \"isolationtester: waited too long for something to happen, canceling\n> step thus-and-so\".\n\nI pushed a patch doing all of the above. This should be enough to\nfix hyrax's problem without any manual adjustments of the animal's\nconfiguration ... unless I've misdiagnosed what's happening.\nWe shall see.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Dec 2019 14:34:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: hyrax versus isolationtester.c's hard-wired timeouts"
}
] |
[
{
"msg_contents": "Hackers,\n\nI'm thinking of submitting a patch, and would like to review my\ndesign ideas with you all before doing so. I've thought about\nthis problem before, but I can't find any email where I might\nhave already proposed this. If I did, and this is a duplicate,\nplease forgive me. I'm not trying to resubmit the idea if it\nhas already been rejected.\n\nI have been considering applications that maintain some current\nstate information in the face of frequently inserted time series\ndata. Assume you loop as follows, waiting for each new data file\ncoming out of a buffered application:\n\n COPY sometable FROM recent_timeseries_data.csv;\n SELECT some_aggregate_statistics\n FROM sometable st, another_table, and_another, ...\n WHERE st.sometimefield > pretty_recent AND ...\n GROUP BY some_aggregate_columns;\n\nIf you don't control how frequently that file gets written, you\ncould get lots of really short files frequently, or fewer larger\nfiles less often.\n\nEven with autovacuum set pretty aggressively on \"sometable\", you\nare likely to get a bad plan for the SELECT due to the statistics\non \"sometable\" for the \"sometimefield\" not taking into account the\nmost recently inserted rows. Some quick EXPLAIN checking verifies\nthat the number of rows predicted for a timeframe later than\nthe newest data as of the most recent ANALYZE will be 1. (I'd\nbe interested in counter examples -- I'm just looking at the\nresults of a quick-and-dirty test.)\n\nUpdating that loop to perform an ANALYZE between the COPY and the\nSELECT helps, but at the expense of potentially running ANALYZE\ntoo often when the recent_timeseries_data.csv files are short\nand frequent. Using a stored procedure to conditionally run the\nanalyze seems unnecessarily complicated.\n\nRelying on autovacuum to rescue you from bad plans seems foolishly\noptimistic, since it would need to run right between your COPY of\nnew data and your SELECT over that data. It is unclear how\nautovacuum could be modified to do this for you. Modifying the\nstatistics system to be predictive based on the state of affairs\nat the last ANALYZE and the number of changes since then seems\nmore promising, but pretty complicated. I might return to this\nidea in a future patch, but today I'm proposing something simpler.\n\nWould it make sense to add an optional parameter to VACUUM, ANALYZE,\nand VACUUM ANALYZE that instructs it to only perform the operation\nif autovacuum would do so under the current conditions? In other\nwords, to consider the PgStat_StatTabEntry's n_dead_tuples and\nn_live_tuples the same way autovacuum would? Something like:\n\n ANALYZE sometable IF AND ONLY IF AUTOVACUUM WOULD;\n\nA similar argument can be made for VACUUM, if you are trying to get\nthe visibility map updated prior to the SELECT so that an index only\nscan will be feasible. As for VACUUM ANALYZE, that has a similar\nuse case, with the downside that you don't know which thresholds\nto use, the ones for vacuum or for analyze. I think I'd implement\nit to run the VACUUM ANALYZE if either condition meets autovacuum's\nrequirements. (See autovacuum_vac_scale and autovacuum_anl_scale.)\n\nI think the words \"IF AND ONLY IF AUTOVACUUM WOULD\" should be\nreplaced with a single word and added to the grammar where\nvacuum_option_elem lists VERBOSE, FREEZE and FULL. Perhaps\n\"OPTIONALLY\", or \"AUTOVACUUMESQUE\", though I'm really hoping\nsomebody has a better suggestion.\n\nIn the given example, above, the user would likely set the vacuum\nand analyze scale factors to zero and the thresholds to something\nthey've empirically determined to work well for their purposes.\nThat might be a problem in practice, given that it also impacts\nautovacuum's choices. Would people prefer that those thresholds\nbe passed as parameters to the command directly?\n\n VACUUM sometable OPTIONALLY (vacuum_threshold = 10, vacuum_scale = 0)\n\nand only default to autovacuum's settings when not specified?\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Sun, 8 Dec 2019 16:12:04 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Statistics improvements for time series data"
},
{
"msg_contents": "On Sun, Dec 8, 2019 at 7:12 PM Mark Dilger <hornschnorter@gmail.com> wrote:\n> I think the words \"IF AND ONLY IF AUTOVACUUM WOULD\" should be\n> replaced with a single word and added to the grammar where\n> vacuum_option_elem lists VERBOSE, FREEZE and FULL. Perhaps\n> \"OPTIONALLY\", or \"AUTOVACUUMESQUE\", though I'm really hoping\n> somebody has a better suggestion.\n\nvacuum_option_elem doesn't exist any more, since commit\n6776142a07afb4c28961f27059d800196902f5f1.\n\nI think OPTIONALLY would be a fine keyword:\n\nVACUUM (OPTIONALLY) my_table;\nANALYZE (OPTIONALLY) my_table;\n\nIt wouldn't even need to be a parser keyword; see\ndisable_page_skipping for a comparable.\n\n> In the given example, above, the user would likely set the vacuum\n> and analyze scale factors to zero and the thresholds to something\n> they've empirically determined to work well for their purposes.\n> That might be a problem in practice, given that it also impacts\n> autovacuum's choices. Would people prefer that those thresholds\n> be passed as parameters to the command directly?\n>\n> VACUUM sometable OPTIONALLY (vacuum_threshold = 10, vacuum_scale = 0)\n>\n> and only default to autovacuum's settings when not specified?\n\nI think that syntax is a non-starter. We should try to fit into the\nexisting mold for vacuum options. But I don't see a reason why we\ncouldn't allow:\n\nVACUUM (OPTIONALLY, THRESHOLD 10, SCALE 0) my_table;\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 10 Dec 2019 13:40:39 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Statistics improvements for time series data"
}
] |
[
{
"msg_contents": "Hi All,\n\nWe recently started seeing an error “ERROR: uncommitted xmin 347341220\nfrom before xid cutoff 967029200 needs to be frozen” on our user tables.\nI’m unable to do ‘vacuum’, ‘vacuum freeze’ or ‘vacuum full’ on the\naffected tables.\n\n From what I read, this was a bug couple of years ago on System tables and\nit was fixed long back.\nHowever, we are seeing these errors on two of our User tables now.\n\nAfter some Google search, I found the fix but, they seem to be temporary.\n\nThese are the solutions I found :\n1. Truncate the table and restore the dump\n\n2. remove ‘pg_internal.init’ from global directory\n\n\nI’m not yet sure about removing the file ‘pg_internal.init’. So, I would\ngo ahead with table rebuilt for now.\n\nAnyways, I would like to know if there is any permanent solution for this\nissue as I did not find a proper solution.\n\nWe are running Postgresql 9.6.10 on SUSE 12.4 OS.\nWe are already in process of upgrading to the latest minor version of 9.6\nand our Application supports only Postgres 9.6.x\n\nI request you to please take a look at it and let me know the fix.\n\nHi All,\n\nWe recently started seeing an error “ERROR: \nuncommitted xmin 347341220 from before xid cutoff 967029200 needs to be\nfrozen” on our user tables.\nI’m unable to do ‘vacuum’, ‘vacuum freeze’ or ‘vacuum full’ on the affected tables.\n\n From what I read, this was a bug couple of years ago on System tables and it\nwas fixed long back.\nHowever, we are seeing these errors on two of our User tables now.\n \nAfter some Google search, I found the fix but, they seem to be temporary. \n\n\nThese are the solutions I found : \n1. Truncate the table and restore the dump\n2. remove ‘pg_internal.init’ from global directory\n\nI’m not yet sure about removing the file ‘pg_internal.init’.\nSo, I would go ahead with table rebuilt for now.\n\n\nAnyways, I would like to know if there is any permanent\nsolution for this issue as I did not find a proper solution. \n\nWe are running Postgresql 9.6.10 on SUSE 12.4 OS. \nWe are already in process of upgrading to the latest minor version of 9.6 and our\nApplication supports only Postgres 9.6.x\n\nI request you to please take a look at it and let me know the fix.",
"msg_date": "Mon, 9 Dec 2019 09:44:30 +0530",
"msg_from": "rajesh kumar <vallarapurajesh@gmail.com>",
"msg_from_op": true,
"msg_subject": "ERROR: uncommitted xmin 347341220 from before xid cutoff 967029200\n needs to be frozen"
},
{
"msg_contents": "On Mon, Dec 9, 2019 at 4:52 AM rajesh kumar <vallarapurajesh@gmail.com> wrote:\n> We recently started seeing an error “ERROR: uncommitted xmin 347341220 from before xid cutoff 967029200 needs to be frozen” on our user tables.\n> I’m unable to do ‘vacuum’, ‘vacuum freeze’ or ‘vacuum full’ on the affected tables.\n>\n> From what I read, this was a bug couple of years ago on System tables and it was fixed long back.\n> However, we are seeing these errors on two of our User tables now.\n>\n> After some Google search, I found the fix but, they seem to be temporary.\n>\n> These are the solutions I found :\n> 1. Truncate the table and restore the dump\n>\n> 2. remove ‘pg_internal.init’ from global directory\n>\n>\n> I’m not yet sure about removing the file ‘pg_internal.init’. So, I would go ahead with table rebuilt for now.\n>\n> Anyways, I would like to know if there is any permanent solution for this issue as I did not find a proper solution.\n\nI think that the best thing to do would be to dump all of your data\nusing pg_dump, create a whole new cluster using initdb, restore the\ndata into the new cluster, and delete the old one.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 9 Dec 2019 09:35:30 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: uncommitted xmin 347341220 from before xid cutoff\n 967029200 needs to be frozen"
},
{
"msg_contents": "Hi Robert,\n\nThanks for your reply.\nSo, i have this question. I have seen a patch on similar issue with shared\ncatalog tables and it is fixed in PostgreSQL 9.6.10.\nWe are currently using 9.6.10.\nDo you think we hit another bug ?\nIs this because of some synchronization issue ?\n\nOr is there something i should do to avoid this issue in the future ?\n\nOn Mon, 9 Dec 2019, 20:05 Robert Haas, <robertmhaas@gmail.com> wrote:\n\n> On Mon, Dec 9, 2019 at 4:52 AM rajesh kumar <vallarapurajesh@gmail.com>\n> wrote:\n> > We recently started seeing an error “ERROR: uncommitted xmin 347341220\n> from before xid cutoff 967029200 needs to be frozen” on our user tables.\n> > I’m unable to do ‘vacuum’, ‘vacuum freeze’ or ‘vacuum full’ on the\n> affected tables.\n> >\n> > From what I read, this was a bug couple of years ago on System tables\n> and it was fixed long back.\n> > However, we are seeing these errors on two of our User tables now.\n> >\n> > After some Google search, I found the fix but, they seem to be temporary.\n> >\n> > These are the solutions I found :\n> > 1. Truncate the table and restore the dump\n> >\n> > 2. remove ‘pg_internal.init’ from global directory\n> >\n> >\n> > I’m not yet sure about removing the file ‘pg_internal.init’. So, I\n> would go ahead with table rebuilt for now.\n> >\n> > Anyways, I would like to know if there is any permanent solution for\n> this issue as I did not find a proper solution.\n>\n> I think that the best thing to do would be to dump all of your data\n> using pg_dump, create a whole new cluster using initdb, restore the\n> data into the new cluster, and delete the old one.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nHi Robert,Thanks for your reply.So, i have this question. I have seen a patch on similar issue with shared catalog tables and it is fixed in PostgreSQL 9.6.10. We are currently using 9.6.10. Do you think we hit another bug ? Is this because of some synchronization issue ? Or is there something i should do to avoid this issue in the future ?On Mon, 9 Dec 2019, 20:05 Robert Haas, <robertmhaas@gmail.com> wrote:On Mon, Dec 9, 2019 at 4:52 AM rajesh kumar <vallarapurajesh@gmail.com> wrote:\n> We recently started seeing an error “ERROR: uncommitted xmin 347341220 from before xid cutoff 967029200 needs to be frozen” on our user tables.\n> I’m unable to do ‘vacuum’, ‘vacuum freeze’ or ‘vacuum full’ on the affected tables.\n>\n> From what I read, this was a bug couple of years ago on System tables and it was fixed long back.\n> However, we are seeing these errors on two of our User tables now.\n>\n> After some Google search, I found the fix but, they seem to be temporary.\n>\n> These are the solutions I found :\n> 1. Truncate the table and restore the dump\n>\n> 2. remove ‘pg_internal.init’ from global directory\n>\n>\n> I’m not yet sure about removing the file ‘pg_internal.init’. So, I would go ahead with table rebuilt for now.\n>\n> Anyways, I would like to know if there is any permanent solution for this issue as I did not find a proper solution.\n\nI think that the best thing to do would be to dump all of your data\nusing pg_dump, create a whole new cluster using initdb, restore the\ndata into the new cluster, and delete the old one.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 9 Dec 2019 21:51:13 +0530",
"msg_from": "rajesh kumar <vallarapurajesh@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: uncommitted xmin 347341220 from before xid cutoff\n 967029200 needs to be frozen"
},
{
"msg_contents": "On Mon, Dec 9, 2019 at 11:21 AM rajesh kumar <vallarapurajesh@gmail.com> wrote:\n> Thanks for your reply.\n> So, i have this question. I have seen a patch on similar issue with shared catalog tables and it is fixed in PostgreSQL 9.6.10.\n> We are currently using 9.6.10.\n> Do you think we hit another bug ?\n> Is this because of some synchronization issue ?\n>\n> Or is there something i should do to avoid this issue in the future ?\n\nI mean, you haven't really provided any useful details that would\nenable me or anyone to guess how the problem happened in the first\nplace. It could be a bug, but you've just given a very high-level\nsummary of what happened, so who knows? Note that this list is for\ndevelopment of PostgreSQL, not technical support.\n\nOne thing to keep in mind is that the error is just a symptom of\ncorruption that happened earlier and was, in effect, detected by\nVACUUM. And those error checks were not there originally; those were\nback-patched into some relatively recent minor version. So it could be\nthat you were running an older version that had a bug and the problem\ngot created, and then when you upgraded to a newer version after that\nthe older corruption got detected by the new checks.\n\nIf you dump and restore, and if there's nothing in your environment\nthat can cause database corruption (bad hardware, bad kernel, bad\nfilesystem, more PostgreSQL bugs, bad backup-and-restore procedure,\nfsync=off, ...) then you shouldn't have any more corruption after\nthat. If you do, then there's a problem someplace, and a PostgreSQL\nbug is a likely but not certain culprit. However, if that's the case,\nyou'd need to provide lots of details about how to reproduce the\nproblem, or about how the problem happened, in order for somebody to\nbe able to fix it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 9 Dec 2019 11:48:35 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: uncommitted xmin 347341220 from before xid cutoff\n 967029200 needs to be frozen"
}
] |
[
{
"msg_contents": "A small add-on to the generated columns feature: Add an ALTER TABLE \nsubcommand for dropping the generated property from a column, per SQL \nstandard.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 9 Dec 2019 08:53:54 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "ALTER TABLE support for dropping generation expression"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: tested, failed\nSpec compliant: not tested\nDocumentation: tested, passed\n\nHello\r\n\r\nPatch does not apply to master. Could you rebase?\r\n\r\nCode looks good and very similar to \"ALTER TABLE ... ALTER COLUMN ... DROP IDENTITY\"\r\n\r\nI noticed one bug:\r\n\r\ncreate table testdrop (i int, b int, m int GENERATED ALWAYS AS ( i*2) stored);\r\ninsert into testdrop(i,b) values (3,4);\r\nalter table testdrop alter COLUMN m drop expression ;\r\nalter table testdrop drop column i;\r\n\r\nHere is no \"m\" column anymore. Possible due some forgotten dependency?\r\n\r\nregards, Sergei\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Wed, 25 Dec 2019 11:01:34 +0000",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE support for dropping generation expression"
},
{
"msg_contents": "On 2019-12-25 12:01, Sergei Kornilov wrote:\n> Patch does not apply to master. Could you rebase?\n\ndone\n\n> I noticed one bug:\n> \n> create table testdrop (i int, b int, m int GENERATED ALWAYS AS ( i*2) stored);\n> insert into testdrop(i,b) values (3,4);\n> alter table testdrop alter COLUMN m drop expression ;\n> alter table testdrop drop column i;\n> \n> Here is no \"m\" column anymore. Possible due some forgotten dependency?\n\nfixed -- good catch\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 28 Dec 2019 18:05:28 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: ALTER TABLE support for dropping generation expression"
},
{
"msg_contents": "Hello\n\nThank you, but I am late: patch has another merge conflict.\n\nConflict seems trivial and patch looks fine for me.\n\nregards, Sergei\n\n\n",
"msg_date": "Fri, 10 Jan 2020 15:20:38 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE support for dropping generation expression"
},
{
"msg_contents": "On 2020-01-10 13:20, Sergei Kornilov wrote:\n> Thank you, but I am late: patch has another merge conflict.\n> \n> Conflict seems trivial and patch looks fine for me.\n\nHere is another patch version. I have resolved the conflict and also \nadded a check that you don't drop the generation expression from an \ninherited column.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 11 Jan 2020 07:35:44 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: ALTER TABLE support for dropping generation expression"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: tested, passed\n\nThank you!\r\nLooks good to me. I have no further comments. I'll mark as ready for committer.\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Mon, 13 Jan 2020 09:56:41 +0000",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE support for dropping generation expression"
},
{
"msg_contents": "On 2020-01-13 10:56, Sergei Kornilov wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: not tested\n> Documentation: tested, passed\n> \n> Thank you!\n> Looks good to me. I have no further comments. I'll mark as ready for committer.\n> \n> The new status of this patch is: Ready for Committer\n\ncommitted, thanks\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 14 Jan 2020 13:47:14 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: ALTER TABLE support for dropping generation expression"
}
] |
[
{
"msg_contents": "Per discussion in [0], here is a patch set to remove support for Python \nversions older than 2.6.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/6d3b7b69-0970-4d40-671a-268c46e93fe3@2ndquadrant.com\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 9 Dec 2019 11:37:54 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "remove support for old Python versions"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Per discussion in [0], here is a patch set to remove support for Python \n> versions older than 2.6.\n\nI took a brief look through this and it seems reasonable. Two\nminor comments:\n\n* In the docs section beginning \"Context managers syntax using the with\nkeyword\", could we drop that entire <sect2>? It seems like it's now not\nsaying much more than \"you can use this standard python feature\", which\nis hardly surprising information.\n\n* I'm not sure it's a good idea to remove the test case you removed\nfrom plpython_subtransaction.sql. We still need to support user\ncode written that way, don't we?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Dec 2019 17:32:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: remove support for old Python versions"
},
{
"msg_contents": "On 2019-12-09 23:32, Tom Lane wrote:\n> * In the docs section beginning \"Context managers syntax using the with\n> keyword\", could we drop that entire <sect2>? It seems like it's now not\n> saying much more than \"you can use this standard python feature\", which\n> is hardly surprising information.\n\nThat section points out the existence of the subxact.enter() and \nsubxact.exit() methods. New code wouldn't need to use those, but \nsomeone might find them in old code, so it would be good to have them at \nleast mentioned somewhere. Maybe it could be rewritten, but I hesitate \nto remove it completely.\n\n> * I'm not sure it's a good idea to remove the test case you removed\n> from plpython_subtransaction.sql. We still need to support user\n> code written that way, don't we?\n\nThe main purpose of that test case was that older Python versions can \ntest this functionality at all, because most of the rest of the file \nwould fail with Python syntax errors around the \"with\" keyword. With \nnewer Python versions there is IMO no need to test both the \"with\" \nvariant and the equivalent __enter__+__exit__ variant separately, \nbecause that would just show that Python itself works correctly. Then \nagain, we could keep it for completeness and clarity.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 13 Dec 2019 13:23:13 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: remove support for old Python versions"
},
{
"msg_contents": "On 2019-12-09 11:37, Peter Eisentraut wrote:\n> Per discussion in [0], here is a patch set to remove support for Python\n> versions older than 2.6.\n> \n> \n> [0]:\n> https://www.postgresql.org/message-id/6d3b7b69-0970-4d40-671a-268c46e93fe3@2ndquadrant.com\n\nIt appears that the removal of old OpenSSL support is stalled or \nabandoned for now, so this patch set is on hold for now as well, as far \nas I'm concerned. I have committed the change of the Python exception \nsyntax in the documentation, though.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 31 Dec 2019 10:46:55 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: remove support for old Python versions"
},
{
"msg_contents": "On Tue, Dec 31, 2019 at 10:46:55AM +0100, Peter Eisentraut wrote:\n> It appears that the removal of old OpenSSL support is stalled or abandoned\n> for now, so this patch set is on hold for now as well, as far as I'm\n> concerned. I have committed the change of the Python exception syntax in\n> the documentation, though.\n\nYeah, the cleanup of the code related to OpenSSL 0.9.8 and 1.0.0 was\nrather limited, which was disappointing, so I am still hesitating in\nmerging that into the tree. Doing more cleanup with the past Python\nversions gives a good extra argument though, so I'll send a notice on\nthe OpenSSL thread here:\nhttps://www.postgresql.org/message-id/20191205083252.GE5064@paquier.xyz\n--\nMichael",
"msg_date": "Thu, 2 Jan 2020 22:47:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: remove support for old Python versions"
},
{
"msg_contents": "On 2019-12-31 10:46, Peter Eisentraut wrote:\n> On 2019-12-09 11:37, Peter Eisentraut wrote:\n>> Per discussion in [0], here is a patch set to remove support for Python\n>> versions older than 2.6.\n>>\n>>\n>> [0]:\n>> https://www.postgresql.org/message-id/6d3b7b69-0970-4d40-671a-268c46e93fe3@2ndquadrant.com\n> \n> It appears that the removal of old OpenSSL support is stalled or\n> abandoned for now, so this patch set is on hold for now as well, as far\n> as I'm concerned. I have committed the change of the Python exception\n> syntax in the documentation, though.\n\nSince the OpenSSL patch went ahead, I have now committed this one as well.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 8 Jan 2020 23:04:48 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: remove support for old Python versions"
},
{
"msg_contents": "On Wed, Jan 08, 2020 at 11:04:48PM +0100, Peter Eisentraut wrote:\n> Since the OpenSSL patch went ahead, I have now committed this one as well.\n\nYippee.\n--\nMichael",
"msg_date": "Thu, 9 Jan 2020 12:09:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: remove support for old Python versions"
},
{
"msg_contents": "Rémi,\n\nplease update your build farm member \"locust\" to a new Python version \n(>=2.6) for the master branch, or disable the Python option.\n\nThanks.\n\nOn 2020-01-08 23:04, Peter Eisentraut wrote:\n> On 2019-12-31 10:46, Peter Eisentraut wrote:\n>> On 2019-12-09 11:37, Peter Eisentraut wrote:\n>>> Per discussion in [0], here is a patch set to remove support for Python\n>>> versions older than 2.6.\n>>>\n>>>\n>>> [0]:\n>>> https://www.postgresql.org/message-id/6d3b7b69-0970-4d40-671a-268c46e93fe3@2ndquadrant.com\n>>\n>> It appears that the removal of old OpenSSL support is stalled or\n>> abandoned for now, so this patch set is on hold for now as well, as far\n>> as I'm concerned. I have committed the change of the Python exception\n>> syntax in the documentation, though.\n> \n> Since the OpenSSL patch went ahead, I have now committed this one as well.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 15 Jan 2020 15:15:11 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: remove support for old Python versions"
}
] |
[
{
"msg_contents": "I was playing with the Unicode normalization test in \nsrc/common/unicode/. I think there is something wrong with how the test \nprogram reports failures. For example, if I manually edit the \nnorm_test_table.h to make a failure, like\n\n- { 74, { 0x00A8, 0 }, { 0x0020, 0x0308, 0 } },\n+ { 74, { 0x00A8, 0 }, { 0x0020, 0x0309, 0 } },\n\nthen the output from the test is\n\nFAILURE (NormalizationTest.txt line 74):\ninput:\t00\nexpected:\t0003\ngot\t0003\n\nwhich doesn't make sense.\n\nThere appear to be several off-by-more-than-one errors in norm_test.c \nprint_wchar_str(). Attached is a patch to fix this (and make the output \na bit prettier). Result afterwards:\n\nFAILURE (NormalizationTest.txt line 74):\ninput: U+00A8\nexpected: U+0020 U+0309\ngot: U+0020 U+0308\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 9 Dec 2019 12:05:30 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Unicode normalization test broken output"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> There appear to be several off-by-more-than-one errors in norm_test.c \n> print_wchar_str(). Attached is a patch to fix this (and make the output \n> a bit prettier). Result afterwards:\n\nI concur that this looks broken and your patch improves it.\nBut I'm not very happy about the remaining assumption that\nwe don't have to worry about characters above U+FFFF. I'd\nrather see it allocate 11 bytes per allowed pg_wchar, and\nmanage the string contents with something like\n\n\tp += sprintf(p, \"U+%04X \", *s);\n\nAn alternative fix would be to start using a PQExpBuffer, but\nit's probably not quite worth that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Dec 2019 17:22:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unicode normalization test broken output"
},
{
"msg_contents": "On 2019-12-09 23:22, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> There appear to be several off-by-more-than-one errors in norm_test.c\n>> print_wchar_str(). Attached is a patch to fix this (and make the output\n>> a bit prettier). Result afterwards:\n> \n> I concur that this looks broken and your patch improves it.\n> But I'm not very happy about the remaining assumption that\n> we don't have to worry about characters above U+FFFF. I'd\n> rather see it allocate 11 bytes per allowed pg_wchar, and\n> manage the string contents with something like\n> \n> \tp += sprintf(p, \"U+%04X \", *s);\n\nGood point. Fixed in attached patch.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 10 Dec 2019 13:18:31 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unicode normalization test broken output"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Good point. Fixed in attached patch.\n\nThis one LGTM.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Dec 2019 11:16:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unicode normalization test broken output"
},
{
"msg_contents": "On 2019-12-10 17:16, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> Good point. Fixed in attached patch.\n> \n> This one LGTM.\n\ndone, thanks\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 11 Dec 2019 08:45:26 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unicode normalization test broken output"
}
] |
[
{
"msg_contents": "Hello PostgreSQL hackers,\n\nI hope I am posting on the right mailing-list.\n\nI am actually doing a PhD related to relational databases and software engineering.\n\nI use PostgreSQL for my research.\n\nI have a few questions about the internals of PostgreSQL and I think they require experts knowledge.\n\nI could not find documentation about that in the nice PostgreSQL documentation but maybe I missed something? Tell me if it is the case.\n\nMy Questions:\n\nQ1. Are PostgreSQL's meta-description tables (such as pg_class) the \"reality\" concerning the state of the DB or are they just a virtual representation ?\n\nWhat I would like to know with this question is: would it be possible to implement DDL queries (e.g. CREATE TABLE, DROP TABLE, CREATE VIEW, ALTER TABLE, etc.) as DML queries that modify the meta-data stored in meta-description tables?\n\nFor example, something like:\n\nINSERT INTO pg_class [...];\n\nTo create a new table (instead of the CREATE TABLE DDL query).\n\nQ1.1 If it is possible, is what is done in reality? I have the feeling that it is not the case and that DDL queries are implemented in C directly.\n\nQ1.2 If it is possible and not done, what is the reason?\n\n--\n\nQ2. Are PostgreSQL's \"meta-constraints\" (i.e. constraints related to database structure such as \"a table can only have a single primary key\") implemented in C code or via data constraints on PostgreSQL's meta-description tables?\n\nI guess, again, they are implemented in C code, but I might be wrong.\n\nQ2.1 If they are not implemented via data constraints on meta-description tables, why ?\n\nQ2.2 Is there somewhere in the documentation a list of such \"meta-constraints\" implemented by PostgreSQL?\n\n\nThanks in advance for answering my questions, any help or pointers to existing documentation will be appreciated.\n\nCheers,\n\nJulien Delplanque\n\n\n\n",
"msg_date": "Mon, 9 Dec 2019 16:35:01 +0100",
"msg_from": "Julien Delplanque <julien.delplanque@inria.fr>",
"msg_from_op": true,
"msg_subject": "Questions about PostgreSQL implementation details"
},
{
"msg_contents": "\n\nOn 12/9/19 7:35 AM, Julien Delplanque wrote:\n> Hello PostgreSQL hackers,\n> \n> I hope I am posting on the right mailing-list.\n> \n> I am actually doing a PhD related to relational databases and software \n> engineering.\n> \n> I use PostgreSQL for my research.\n> \n> I have a few questions about the internals of PostgreSQL and I think \n> they require experts knowledge.\n> \n> I could not find documentation about that in the nice PostgreSQL \n> documentation but maybe I missed something? Tell me if it is the case.\n> \n> My Questions:\n> \n> Q1. Are PostgreSQL's meta-description tables (such as pg_class) the \n> \"reality\" concerning the state of the DB or are they just a virtual \n> representation ?\n\nNot all of them are real tables; some of the pg_catalog relations are\nviews over others of them. But many of them are real tables with C\nstructs that back them. Take a look in src/include/catalog/pg_class.h\nand you'll see the C struct definition, somewhat obscured by some\nmacros that make it less obvious to people not familiar with the\npostgresql sources.\n\nOn line 29:\n\nCATALOG(pg_class,1259,RelationRelationId) BKI_BOOTSTRAP \nBKI_ROWTYPE_OID(83,RelationRelation_Rowtype_Id) BKI_SCHEMA_MACRO\n{\n...\n}\n\nThat's a typedef. See genbki.h where it defines the macro:\n\n#define CATALOG(name,oid,oidmacro)\ttypedef struct CppConcat(FormData_,name)\n\n> What I would like to know with this question is: would it be possible to \n> implement DDL queries (e.g. CREATE TABLE, DROP TABLE, CREATE VIEW, ALTER \n> TABLE, etc.) as DML queries that modify the meta-data stored in \n> meta-description tables?\n> \n> For example, something like:\n> \n> INSERT INTO pg_class [...];\n> \n> To create a new table (instead of the CREATE TABLE DDL query).\n\nYou are not allowed to insert into the pg_class table directly. There\nare good reasons for that. Simply inserting a row into this table would\nnot cause all the infrastructure that backs a table to pop into\nexistence. So you have to use the DDL commands.\n\n>\n> Q1.1 If it is possible, is what is done in reality? I have the feeling \n> that it is not the case and that DDL queries are implemented in C directly.\n\nSee src/backend/commands/tablecmds.c, function DefineRelation.\n\n\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Mon, 9 Dec 2019 07:52:59 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Questions about PostgreSQL implementation details"
},
{
"msg_contents": "Julien Delplanque <julien.delplanque@inria.fr> writes:\n> I have a few questions about the internals of PostgreSQL and I think they require experts knowledge.\n\n> Q1. Are PostgreSQL's meta-description tables (such as pg_class) the \"reality\" concerning the state of the DB or are they just a virtual representation ?\n\nThe system catalogs are reality as far as the effects of DDL go. In the\nparticular case of pg_class, there is additional reality, which is that\n(most) pg_class rows represent one or more data files on-disk. You could\nin fact simulate many DDL operations by manual inserts/deletes/updates on\nsystem catalogs; but that would not result in any actions on the data\nfiles, so it falls down for the specific cases of CREATE/DROP TABLE,\nCREATE/DROP INDEX, etc.\n\n> What I would like to know with this question is: would it be possible to implement DDL queries (e.g. CREATE TABLE, DROP TABLE, CREATE VIEW, ALTER TABLE, etc.) as DML queries that modify the meta-data stored in meta-description tables?\n\nUnderneath, many of those operations are just catalog manipulations,\nso yes up to the point where you need to do something that impacts\nuser data storage.\n\n(In practice, getting all the details right from a SQL client\nwould be a pretty painful thing, so I'm not sure I see the point.\nUsually, modifying the PG C code or writing an extension would be\na saner approach to modifying the system's behavior.)\n\n> Q2. Are PostgreSQL's \"meta-constraints\" (i.e. constraints related to database structure such as \"a table can only have a single primary key\") implemented in C code or via data constraints on PostgreSQL's meta-description tables?\n\nMostly code. The only real constraints on the system catalogs, in the\nsense of something that would reject an ill-advised low-level update,\nare the unique indexes. There are also NOT NULL markers on many of\nthe catalogs' columns, but those are only enforced against manual\nSQL updates not updates made by C code. This is all somewhat historical,\nI suppose, but it's worked well enough for us.\n\n> Thanks in advance for answering my questions, any help or pointers to existing documentation will be appreciated.\n\nThere's no substitute for reading the source code. Unlike some other\nsystems, PG was developed as open source from the beginning, so there\nis not much in the way of a \"theory of operations manual\" or other\nseparate internals documentation --- people with these sorts of\nquestions are expected to go read the code. You could perhaps start\nby skimming\n\nhttps://www.postgresql.org/docs/devel/internals.html\n\nand then go look for README files in the portions of the source\ntree that interest you.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Dec 2019 11:12:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Questions about PostgreSQL implementation details"
},
{
"msg_contents": "Hi Julien!\n\nOn 09/12/2019 17:35, Julien Delplanque wrote:\n> Q1. Are PostgreSQL's meta-description tables (such as pg_class) the \"reality\" concerning the state of the DB or are they just a virtual representation ?\n\nYes, the catalog tables are the authoritative source. The system uses \nthose tables internally to get the information too.\n\nSome of the pg_* relations are just views over other catalog tables, though.\n\n> What I would like to know with this question is: would it be possible to implement DDL queries (e.g. CREATE TABLE, DROP TABLE, CREATE VIEW, ALTER TABLE, etc.) as DML queries that modify the meta-data stored in meta-description tables?\n> \n> For example, something like:\n> \n> INSERT INTO pg_class [...];\n> \n> To create a new table (instead of the CREATE TABLE DDL query).\n> \n> Q1.1 If it is possible, is what is done in reality? I have the feeling that it is not the case and that DDL queries are implemented in C directly.\n> \n> Q1.2 If it is possible and not done, what is the reason?\n\nThe C code for the DDL commands do some things in addition to modifying \nthe catalog tables. Notably for CREATE TABLE, it creates the relation \nfile in the data directory, where all the data is stored. It also \nhandles locking, invalidating various caches, firing event triggers etc. \nExcept for creating relation files, those other things happen just in \nmemory, though.\n\nIt is not supported, and please don't do it in production, but you could \ntry it out. Set \"allow_system_table_mods=on\", and insert to pg_class, \npg_attribute, etc. See how well it works. Beware that there are internal \ncaches, called \"syscaches\", in backends over the catalog tables, so if \nyou modify them directly, you may need to restart for the changes to \ntake effect.\n\n> Q2. Are PostgreSQL's \"meta-constraints\" (i.e. constraints related to database structure such as \"a table can only have a single primary key\") implemented in C code or via data constraints on PostgreSQL's meta-description tables?\n> \n> I guess, again, they are implemented in C code, but I might be wrong.\n\nIn C code.\n\n> Q2.1 If they are not implemented via data constraints on meta-description tables, why ?\n\nI think there are some restrictions that cannot easily be represented as \nconstraints. Also, we've never supported constraints on catalog tables, \nso no one's given much thought to what it would look like if we did.\n\n> Q2.2 Is there somewhere in the documentation a list of such \"meta-constraints\" implemented by PostgreSQL?\n\nHmm, I don't think there is. Your best bet is to just look at the C \ncode, I'm afraid.\n\n- Heikki\n\n\n",
"msg_date": "Mon, 9 Dec 2019 18:13:13 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Questions about PostgreSQL implementation details"
},
{
"msg_contents": "\n\nOn 12/9/19 7:52 AM, Mark Dilger wrote:\n>>\n>> Q1.1 If it is possible, is what is done in reality? I have the feeling \n>> that it is not the case and that DDL queries are implemented in C \n>> directly.\n> \n> See src/backend/commands/tablecmds.c, function DefineRelation.\n\nI realize I could be a bit more helpful, here. For a SQL command\nlike \"CREATE TABLE\", you can first look in src/backend/parser/gram.y\nfor the grammar rule. In this case, searching for CREATE TABLE\nleads you to a banner comment around line 3132. The rule for\nCreateStmt creates a node of type CreateStmt. That leads you\nto a struct of the same name in src/include/nodes/parsenodes.h.\nYou can see all the fields of that struct, and reconcile those\nagainst what the code in gram.y is doing to populate those\nfields. You can then look in src/backend/tcop/utility.c for\nthe T_CreateStmt nodeTag, and you'll find that it appears in\na few switch statements. One of those in ProcessUtilitySlow\ncalls DefineRelation along with a bunch of other stuff.\n\nThat's how you can trace these commands. Let's take DROP TABLE\nas a second example. This one is harder, because \"DROP TABLE\"\ndoesn't exist exactly. It's part of a larger grammar production\nfor DropStmt that includes other DROP commands. All the same,\nyou can see there is a rule for DropStmt that creates a node of\ntype DropStmt. Looking in src/include/nodes/parsenodes.h you\ncan see the struct of the same name. Looking in tcop/utility.c\nyou see T_DropStmt is handled in a few switch statements, and\nseem to run through ProcessUtilitySlow and ExecDropStmt. The\nfunction ExecDropStmt has a switch over stmt->removeType, which\nwas populated back in gram.y to the value OBJECT_TABLE. That\nnow serves to select the RemoveRelations function.\n\nThis sort of analysis is fairly useful for getting from a SQL\nstatement to the code in src/backend/commands/ that implements\nthe guts of that statement.\n\nI hope this helps.\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Mon, 9 Dec 2019 08:31:51 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Questions about PostgreSQL implementation details"
},
{
"msg_contents": "Mark Dilger <hornschnorter@gmail.com> writes:\n> [ useful tips about finding the code that implements a SQL command ]\n\nBTW, if it wasn't obvious already, you *really* want to have some kind\nof tool that easily finds the definition of a particular C symbol.\nYou can fall back on \"grep -r\" or \"git grep\", but lots of people use\nctags or etags or some other C-aware indexing tool.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Dec 2019 12:21:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Questions about PostgreSQL implementation details"
},
{
"msg_contents": "Thanks for your answer, I will dive into the C code then.\n\nLe 9/12/19 à 16:52, Mark Dilger a écrit :\n>\n> Not all of them are real tables; some of the pg_catalog relations are\n> views over others of them. But many of them are real tables with C\n> structs that back them. Take a look in src/include/catalog/pg_class.h\n> and you'll see the C struct definition, somewhat obscured by some\n> macros that make it less obvious to people not familiar with the\n> postgresql sources. \n\nIndeed, I should have say it in my email, I saw in the documentation \nthat some of them are views. :-)\n\nCheers,\n\nJulien\n\nLe 9/12/19 à 16:52, Mark Dilger a écrit :\n>\n>\n> On 12/9/19 7:35 AM, Julien Delplanque wrote:\n>> Hello PostgreSQL hackers,\n>>\n>> I hope I am posting on the right mailing-list.\n>>\n>> I am actually doing a PhD related to relational databases and \n>> software engineering.\n>>\n>> I use PostgreSQL for my research.\n>>\n>> I have a few questions about the internals of PostgreSQL and I think \n>> they require experts knowledge.\n>>\n>> I could not find documentation about that in the nice PostgreSQL \n>> documentation but maybe I missed something? Tell me if it is the case.\n>>\n>> My Questions:\n>>\n>> Q1. Are PostgreSQL's meta-description tables (such as pg_class) the \n>> \"reality\" concerning the state of the DB or are they just a virtual \n>> representation ?\n>\n> Not all of them are real tables; some of the pg_catalog relations are\n> views over others of them. But many of them are real tables with C\n> structs that back them. Take a look in src/include/catalog/pg_class.h\n> and you'll see the C struct definition, somewhat obscured by some\n> macros that make it less obvious to people not familiar with the\n> postgresql sources.\n>\n> On line 29:\n>\n> CATALOG(pg_class,1259,RelationRelationId) BKI_BOOTSTRAP \n> BKI_ROWTYPE_OID(83,RelationRelation_Rowtype_Id) BKI_SCHEMA_MACRO\n> {\n> ...\n> }\n>\n> That's a typedef. See genbki.h where it defines the macro:\n>\n> #define CATALOG(name,oid,oidmacro) typedef struct \n> CppConcat(FormData_,name)\n>\n>> What I would like to know with this question is: would it be possible \n>> to implement DDL queries (e.g. CREATE TABLE, DROP TABLE, CREATE VIEW, \n>> ALTER TABLE, etc.) as DML queries that modify the meta-data stored in \n>> meta-description tables?\n>>\n>> For example, something like:\n>>\n>> INSERT INTO pg_class [...];\n>>\n>> To create a new table (instead of the CREATE TABLE DDL query).\n>\n> You are not allowed to insert into the pg_class table directly. There\n> are good reasons for that. Simply inserting a row into this table would\n> not cause all the infrastructure that backs a table to pop into\n> existence. So you have to use the DDL commands.\n>\n>>\n>> Q1.1 If it is possible, is what is done in reality? I have the \n>> feeling that it is not the case and that DDL queries are implemented \n>> in C directly.\n>\n> See src/backend/commands/tablecmds.c, function DefineRelation.\n>\n>\n>\n\n\n",
"msg_date": "Mon, 9 Dec 2019 19:09:54 +0100",
"msg_from": "Julien Delplanque <julien.delplanque@inria.fr>",
"msg_from_op": true,
"msg_subject": "Re: Questions about PostgreSQL implementation details"
},
{
"msg_contents": "On Tue, 10 Dec 2019 at 01:21, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Mark Dilger <hornschnorter@gmail.com> writes:\n> > [ useful tips about finding the code that implements a SQL command ]\n>\n> BTW, if it wasn't obvious already, you *really* want to have some kind\n> of tool that easily finds the definition of a particular C symbol.\n> You can fall back on \"grep -r\" or \"git grep\", but lots of people use\n> ctags or etags or some other C-aware indexing tool.\n>\n>\nI strongly recommend cscope with editor integration for your preferred\neditor btw.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Fri, 13 Dec 2019 13:33:49 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Questions about PostgreSQL implementation details"
},
{
"msg_contents": "On Mon, 9 Dec 2019 at 23:35, Julien Delplanque <julien.delplanque@inria.fr>\nwrote:\n\n> Hello PostgreSQL hackers,\n>\n> I hope I am posting on the right mailing-list.\n>\n> I am actually doing a PhD related to relational databases and software\n> engineering.\n>\n> I use PostgreSQL for my research.\n>\n> I have a few questions about the internals of PostgreSQL and I think they\n> require experts knowledge.\n>\n> I could not find documentation about that in the nice PostgreSQL\n> documentation but maybe I missed something? Tell me if it is the case.\n>\n\nThere are a bunch of README files in the source tree that concern various\ninnards of PostgreSQL. They're not always referred to by any comments etc,\nso you have to know they exist. They're usually well worth reading, though\nit can take a while before you understand enough of PostgreSQL's\narchitecture for them to make sense...\n\nTry\n\n find src/ -name README\\*\n\n\n> Q1. Are PostgreSQL's meta-description tables (such as pg_class) the\n> \"reality\" concerning the state of the DB or are they just a virtual\n> representation ?\n>\n\nThat's been largely answered. But I want to point out an important caveat\nthat isn't obvious to new people: The oid of a relation (pg_class.oid) is\nnot the same thing as the pg_class.relfilenode, which is usually the base\nof the filename of the on-disk storage for the relation. On an idle or new\ndatabase most relations are created with an equal oid and relfilename, so\nit's easy to think the oid maps to the on-disk name of a relation, but it\ndoesn't. The relation oid will not change so long as the relation exists,\nbut the relfilenode may change if the table contents are rewritten, etc.\nAdditionally, there are special tables that are \"relmapped\" such that they\ndon't have a normal relfilenode at all, instead access is indirected via a\nseparate mapping. 
IIRC that's mainly necessary so we can bootstrap access\nto the catalog tables that tell us how to read the catalogs.\n\nWhat I would like to know with this question is: would it be possible to\n> implement DDL queries (e.g. CREATE TABLE, DROP TABLE, CREATE VIEW, ALTER\n> TABLE, etc.) as DML queries that modify the meta-data stored in\n> meta-description tables?\n>\n\nNot really.\n\nPostgreSQL has a caching layer - sycache, relcache, catcache - and\ninvalidation scheme that it relies on. It doesn't execute regular queries\non the system catalogs. It also has simplifying rules around how they are\nupdated and accessed. See the logic in genam.c etc. Catalogs may also\nrepresent things that aren't just other DB rows - for example, pg_class\nentries are associated with files on disk for individual database tables.\n\nYou can't just insert into pg_class, pg_attribute, etc and expect that to\nsafely create a table. Though it's surprising how much you can get away\nwith by hacking the catalogs if you're very careful and you trick\nPostgreSQL into firing appropriate invalidations. I'd quite like to have a\nSQL-exposed way to do a forced global cache flush and invalidation for use\nin emergency scary catalog hacking situations.\n\nSo you can do quite a bit with direct catalog surgery, but it's dangerous\nand if you break the database, you get to keep the pieces.\n\nQ1.1 If it is possible, is what is done in reality? I have the feeling that\n> it is not the case and that DDL queries are implemented in C directly.\n>\n\nRight. See standard_ProcessUtility() and friends.\n\nQ1.2 If it is possible and not done, what is the reason?\n>\n\nSpeed - no need to run the full executor. Simplification of catalog access.\nCaching and invalidations. Chicken/egg problems: how do you \"CREATE TABLE\npg_class\"? . Lots more.\n\n\n> Q2. Are PostgreSQL's \"meta-constraints\" (i.e. 
constraints related to\n> database structure such as \"a table can only have a single primary key\")\n> implemented in C code or via data constraints on PostgreSQL's\n> meta-description tables?\n>\n\nSystem catalogs are not permitted to have CONSTRAINTs (CHECK constraints,\nUNIQUE constraints, PRIMARY KEY constraints, FOREIGN KEY constraints, etc).\n\nAll such management is done in C level logic with the assistance of the\npg_depend catalog and the relationships it tracks.\n\n\n> Q2.1 If they are not implemented via data constraints on meta-description\n> tables, why ?\n>\n\nSame as above.\n\n\n> Q2.2 Is there somewhere in the documentation a list of such\n> \"meta-constraints\" implemented by PostgreSQL?\n>\n\nNot AFAIK.\n\nWhy?\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Fri, 13 Dec 2019 13:48:46 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Questions about PostgreSQL implementation details"
}
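Craig's oid-vs-relfilenode caveat above can be sketched as a toy model (plain Python, not PostgreSQL code; the starting file-node value 16384 is just an illustrative stand-in): the oid is a stable identifier for the relation's lifetime, while the relfilenode names the current on-disk file and is reassigned by rewriting commands.

```python
# Toy illustration of the oid vs relfilenode distinction discussed above
# (NOT PostgreSQL code): a relation's oid never changes, but its
# relfilenode -- the base of its on-disk file name -- is reassigned
# whenever the table is rewritten (VACUUM FULL, CLUSTER, TRUNCATE, ...).

import itertools

filenode_counter = itertools.count(16384)  # illustrative starting value

class Relation:
    def __init__(self, oid):
        self.oid = oid                          # stable for the relation's lifetime
        self.relfilenode = next(filenode_counter)

    def rewrite(self):
        """A rewriting command writes a fresh file and points at it."""
        self.relfilenode = next(filenode_counter)

t = Relation(oid=16384)
assert t.oid == t.relfilenode == 16384      # freshly created: they coincide
t.rewrite()                                  # e.g. VACUUM FULL
assert t.oid == 16384                        # oid is unchanged ...
assert t.relfilenode == 16385                # ... but the on-disk name moved
```

This is why, as Craig notes, assuming the oid maps to the on-disk file name works on a fresh database and silently breaks after the first rewrite.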
] |
[
{
"msg_contents": "I have a table with about 7 million records. I had a query in which I\nneeded 2 indexes added, one for a created timestamp field another for an id\nfield; both very high cardinality.\n\nFirst I noticed the query would not use the timestamp index no matter what\nsession config settings I used. I finally created a temp table copy of the\ntable and verified index is used. Then I rebuilt the main table with\nVACUUM FULL and this caused the index to be used.\n\nI repeated this process again for an id index on same table. I created the\nindex and it would never be chosen no matter what, until I rebuilt the\ntable using VAC FULL.\n\nI have run bt_index_check and bt_index_parent_check with heapallindexed on\none of these indexes but nothing comes up.\n\nBut one other noteworthy thing is that a cluster restart appears to fix the\nissue, because on a snapshot of this system (which has been restarted) also\nat 11.6, the planner picks up the index.\n\nWe quite recently (~ 2-3 weeks) did pgupgrade from 9.6 to 11. This table\nin question is fed via pglogical. I checked similar behavior on another\ntable in this stream and could not reproduce it. So for now, it seems\nlimited to this one table.\n\nAny suggestions as to how I could verify what is going on here? Anyone\nexperienced the same?\n\nThanks!\nJeremy\n\nI have a table with about 7 million records. I had a query in which I needed 2 indexes added, one for a created timestamp field another for an id field; both very high cardinality.First I noticed the query would not use the timestamp index no matter what session config settings I used. I finally created a temp table copy of the table and verified index is used. Then I rebuilt the main table with VACUUM FULL and this caused the index to be used.I repeated this process again for an id index on same table. 
I created the index and it would never be chosen no matter what, until I rebuilt the table using VAC FULL.I have run bt_index_check and bt_index_parent_check with heapallindexed on one of these indexes but nothing comes up.But one other noteworthy thing is that a cluster restart appears to fix the issue, because on a snapshot of this system (which has been restarted) also at 11.6, the planner picks up the index.We quite recently (~ 2-3 weeks) did pgupgrade from 9.6 to 11. This table in question is fed via pglogical. I checked similar behavior on another table in this stream and could not reproduce it. So for now, it seems limited to this one table.Any suggestions as to how I could verify what is going on here? Anyone experienced the same?Thanks!Jeremy",
"msg_date": "Mon, 9 Dec 2019 11:59:52 -0600",
"msg_from": "Jeremy Finzel <finzelj@gmail.com>",
"msg_from_op": true,
"msg_subject": "Index corruption / planner issue with one table in my pg 11.6\n instance"
},
{
"msg_contents": "On Mon, Dec 9, 2019 at 1:00 PM Jeremy Finzel <finzelj@gmail.com> wrote:\n\n> I have a table with about 7 million records. I had a query in which I\n> needed 2 indexes added, one for a created timestamp field another for an id\n> field; both very high cardinality.\n>\n> First I noticed the query would not use the timestamp index no matter what\n> session config settings I used. I finally created a temp table copy of the\n> table and verified index is used. Then I rebuilt the main table with\n> VACUUM FULL and this caused the index to be used.\n>\n\nWere they built with CONCURRENTLY? Do you have any long-open snapshots?\n\nCheers,\n\nJeff\n\n>\n\nOn Mon, Dec 9, 2019 at 1:00 PM Jeremy Finzel <finzelj@gmail.com> wrote:I have a table with about 7 million records. I had a query in which I needed 2 indexes added, one for a created timestamp field another for an id field; both very high cardinality.First I noticed the query would not use the timestamp index no matter what session config settings I used. I finally created a temp table copy of the table and verified index is used. Then I rebuilt the main table with VACUUM FULL and this caused the index to be used.Were they built with CONCURRENTLY? Do you have any long-open snapshots?Cheers,Jeff",
"msg_date": "Mon, 9 Dec 2019 15:51:39 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index corruption / planner issue with one table in my pg 11.6\n instance"
},
{
"msg_contents": "On Mon, Dec 09, 2019 at 03:51:39PM -0500, Jeff Janes wrote:\n> On Mon, Dec 9, 2019 at 1:00 PM Jeremy Finzel <finzelj@gmail.com> wrote:\n>> I have a table with about 7 million records. I had a query in which I\n>> needed 2 indexes added, one for a created timestamp field another for an id\n>> field; both very high cardinality.\n>>\n>> First I noticed the query would not use the timestamp index no matter what\n>> session config settings I used. I finally created a temp table copy of the\n>> table and verified index is used. Then I rebuilt the main table with\n>> VACUUM FULL and this caused the index to be used.\n> \n> Were they built with CONCURRENTLY? Do you have any long-open snapshots?\n\nSomething new as of 11 is that btree indexes can be built in parallel,\nand before releasing it we found some bugs with covering indexes.\nPerhaps we have an issue hidden behind one of these, but hard to be\nsure. I have not seen that yet as of v11.\n--\nMichael",
"msg_date": "Tue, 10 Dec 2019 11:33:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Index corruption / planner issue with one table in my pg 11.6\n instance"
},
{
"msg_contents": "On Mon, Dec 9, 2019 at 6:33 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Something new as of 11 is that btree indexes can be built in parallel,\n> and before releasing it we found some bugs with covering indexes.\n> Perhaps we have an issue hidden behind one of these, but hard to be\n> sure.\n\nI doubt it.\n\nJeremy did not report queries that give wrong answers. He only said\nthat the optimizer refused to use one particular index, before a\nVACUUM FULL seemingly corrected the problem. OTOH, Jeremy did report\nusing contrib/amcheck on the index, which didn't complain. (Note also\nthat the amcheck functions will throw an error with an !indisvalid\nindex.)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 9 Dec 2019 18:56:26 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Index corruption / planner issue with one table in my pg 11.6\n instance"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n\n> On Mon, Dec 9, 2019 at 6:33 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n>> Something new as of 11 is that btree indexes can be built in parallel,\n>> and before releasing it we found some bugs with covering indexes.\n>> Perhaps we have an issue hidden behind one of these, but hard to be\n>> sure.\n>\n> I doubt it.\n>\n> Jeremy did not report queries that give wrong answers. He only said\n> that the optimizer refused to use one particular index, before a\n> VACUUM FULL seemingly corrected the problem. OTOH, Jeremy did report\n> using contrib/amcheck on the index, which didn't complain. (Note also\n> that the amcheck functions will throw an error with an !indisvalid\n> index.)\n\nI suspect this was due to indcheckxmin=true for the involved index and\nthe documented (but IMO confusing) interplay w/broken hot-chains and\nvisibility.\n\nChecking the same DB today, I find 35 indexes across the entire system\nhaving indcheckxmin=true, including one on the same table, though not\nthe same index that Pg refused to use recently.\n\nMany of the indexes have very old xmins and thus s/b all considered in\nplans.\n\nI was able to get that remaining index out of the indcheckxmin=true list\nby...\n\n1. Reindexing $index (did not change anything)\n2. begin; drop; create; commit (still in the list but with a much newer\nxmin.)\n3. Vac-Full the table again (and now the index is gone from the\nindcheckxmin=true list.)\n\nPlease advise.\n\nThx\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: postgres.consulting@comcast.net\n\n\n",
"msg_date": "Mon, 09 Dec 2019 23:10:36 -0600",
"msg_from": "Jerry Sievers <gsievers19@comcast.net>",
"msg_from_op": false,
"msg_subject": "Re: Index corruption / planner issue with one table in my pg 11.6\n instance"
},
{
"msg_contents": "Jerry Sievers <gsievers19@comcast.net> writes:\n> I suspect this was due to indcheckxmin=true for the involved index and\n> the documented (but IMO confusing) interplay w/broken hot-chains and\n> visibility.\n\nYeah. The reported behavior can mostly be explained if we assume\nthat there's some HOT chain in the table that involves an update\nof this particular column, so that if we build an index on that\ncolumn we see a broken HOT chain, but building an index on some\nother column doesn't have a problem.\n\nThe thing this doesn't easily explain is that the behavior persists\nacross repeated index rebuilds. A broken HOT chain is only broken\nas long as the older entry is still visible-to-somebody, so that\nsuch situations ought to be self-healing as time passes. If it\nfails repeatedly, this theory requires assuming that either\n\n1. You've got some extremely old open transactions (maybe forgotten\nprepared transactions?), or\n\n2. Your workload is constantly generating new broken HOT chains of\nthe same sort, so that there's usually a live one when you try\nto build an index.\n\nThe fact that you even notice the indcheckxmin restriction indicates\nthat you do tend to have long-running transactions in the system,\nelse the index would come free for use fairly quickly. So #1 isn't\nas implausible as I might otherwise think. But #2 seems probably\nmore likely on the whole. OTOH, neither point is exactly within\nthe offered evidence.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Dec 2019 01:09:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Index corruption / planner issue with one table in my pg 11.6\n instance"
},
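Tom's broken-HOT-chain explanation can be sketched as a toy model (plain Python with simplified integer transaction IDs, not PostgreSQL's actual visibility code): an index whose pg_index row was flagged with indcheckxmin at build time is skipped by the planner until every snapshot older than the build is gone, which is why the behaviour "self-heals" once long-running transactions finish.

```python
# Toy model of the indcheckxmin rule discussed above (NOT real PostgreSQL
# code): with indcheckxmin=true, the planner may only use the index once
# the transaction that built it (index_xmin) is older than the oldest
# snapshot still running.

def index_usable(index_xmin: int, indcheckxmin: bool, oldest_snapshot_xmin: int) -> bool:
    """Return True if the planner may consider the index for a new query."""
    if not indcheckxmin:
        return True            # no broken HOT chain was seen at build time
    # With indcheckxmin set, every open snapshot must postdate the build.
    return index_xmin < oldest_snapshot_xmin

# Index built at xid 100 over a broken HOT chain:
assert not index_usable(100, True, 90)   # hours-long report query from xid 90
assert index_usable(100, True, 150)      # old transactions gone: self-heals
assert index_usable(100, False, 90)      # clean build: usable immediately
```

Under this model, rebuilding the index merely resets index_xmin to a newer value, so while long transactions keep starting before each rebuild the index stays unused, matching Jeremy's observation that it only appeared in plans hours later.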
{
"msg_contents": "On Tue, Dec 10, 2019 at 12:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Yeah. The reported behavior can mostly be explained if we assume\n> that there's some HOT chain in the table that involves an update\n> of this particular column, so that if we build an index on that\n> column we see a broken HOT chain, but building an index on some\n> other column doesn't have a problem.\n>\n\nThe problem exists so far as I can tell on indexing *any column* of *this\nparticular table*. I tried same experiment on another table in the same\nreplication stream, and I cannot reproduce it.\n\nI am building the index **non-concurrently** every time.\n\n\n> The thing this doesn't easily explain is that the behavior persists\n> across repeated index rebuilds. A broken HOT chain is only broken\n> as long as the older entry is still visible-to-somebody, so that\n> such situations ought to be self-healing as time passes. If it\n> fails repeatedly, this theory requires assuming that either\n>\n> 1. You've got some extremely old open transactions (maybe forgotten\n> prepared transactions?), or\n>\n\nNo prepared_xacts and no transactions older than a few hours. Several hour\ntransactions are common in this reporting system. I have not yet seen if\nafter several hours the index starts showing up in plans.\n\n\n> 2. Your workload is constantly generating new broken HOT chains of\n> the same sort, so that there's usually a live one when you try\n> to build an index.\n>\n> The fact that you even notice the indcheckxmin restriction indicates\n> that you do tend to have long-running transactions in the system,\n> else the index would come free for use fairly quickly. So #1 isn't\n> as implausible as I might otherwise think. But #2 seems probably\n> more likely on the whole. OTOH, neither point is exactly within\n> the offered evidence.\n>\n\nIs there a way for me to test this theory? I tried the following with no\nchange in behavior:\n\n 1. Disable write load to table\n 2. 
Vacuum analyze table (not vac full)\n 3. Create index\n 4. Explain\n\nStill did not pick up the index.\n\nThanks,\nJeremy",
"msg_date": "Tue, 10 Dec 2019 08:25:18 -0600",
"msg_from": "Jeremy Finzel <finzelj@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Index corruption / planner issue with one table in my pg 11.6\n instance"
},
{
"msg_contents": "On Tue, Dec 10, 2019 at 8:25 AM Jeremy Finzel <finzelj@gmail.com> wrote:\n\n> Is there a way for me to test this theory? I tried the following with no\n> change in behavior:\n>\n> 1. Disable write load to table\n> 2. Vacuum analyze table (not vac full)\n> 3. Create index\n> 4. Explain\n>\n> Still did not pick up the index.\n>\n\nJust another followup: with no other intervention on our part, after many\nhours the planner is picking up the index.\n\nI don't quite know what is causing it still, but is this behavior actually\ndesired? It's pretty inconvenient when trying to build an index for a\nquery need and immediately use it which used to work :).\n\nThanks,\nJeremy\n\nOn Tue, Dec 10, 2019 at 8:25 AM Jeremy Finzel <finzelj@gmail.com> wrote:Is there a way for me to test this theory? I tried the following with no change in behavior:Disable write load to tableVacuum analyze table (not vac full)Create indexExplainStill did not pick up the index.Just another followup: with no other intervention on our part, after many hours the planner is picking up the index.I don't quite know what is causing it still, but is this behavior actually desired? It's pretty inconvenient when trying to build an index for a query need and immediately use it which used to work :).Thanks,Jeremy",
"msg_date": "Thu, 12 Dec 2019 07:52:14 -0600",
"msg_from": "Jeremy Finzel <finzelj@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Index corruption / planner issue with one table in my pg 11.6\n instance"
}
] |
[
{
"msg_contents": "Hi,\n\nThe memory consumption of VACUUM has some issues and could be improved.\nSome of its limitations are recorded in the comments of the “vacuumlazy.c”\nfile. The current design of VACUUM memory usage is that it stores the TID\nin a fixed-size array which is allocated at the start, based upon\nmaintenance_work_mem. There are three problems with that design\n\n - If the value of maintenance_work_mem is too large then it is a waste of\nmemory for small tables.\n - If the value of maintenance_work_mem is too small or “TIDs” do not fit\nin the array then multiple scans happen.\n - In cases where maintainess_work_mem is set too large, and we have a\nbigger value of vacuume_count, then the system can be out-of-memory.\n\nThere are two solutions for these problems. The first is to use a list\ninstead of a fixed size array. The second solution is to allocate the\nmemory in chunks.\nThe attached WIP patch creates an array of ItemPointers and allocates\nmemory in chunks by dividing the maintenance_work_mem into multiple chunks.\n\n\nComments?\n--\nIbrar Ahmed",
"msg_date": "Mon, 9 Dec 2019 23:30:20 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": true,
"msg_subject": "VACUUM memory management"
},
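The chunked-allocation design described above can be sketched outside PostgreSQL in a few lines of Python (the chunk size and TID representation here are arbitrary stand-ins, not the patch's actual values): memory for dead TIDs grows one chunk at a time instead of as one maintenance_work_mem-sized array allocated up front.

```python
# Sketch of the chunked dead-TID store described above (an illustration,
# not the actual patch): instead of one array sized from
# maintenance_work_mem, fixed-size chunks are appended only as they fill.

CHUNK_TIDS = 4  # stand-in for (chunk_bytes / sizeof(ItemPointerData))

class DeadTupleStore:
    def __init__(self):
        self.chunks = []        # list of fixed-size chunks (ItemPointer *)
        self.count = 0          # total dead TIDs recorded

    def add(self, tid):
        if self.count % CHUNK_TIDS == 0:
            self.chunks.append([])          # lazily allocate the next chunk
        self.chunks[-1].append(tid)
        self.count += 1

store = DeadTupleStore()
for blk in range(3):                        # (block, offset) pairs, like a TID
    for off in range(2):
        store.add((blk, off))

assert store.count == 6
assert len(store.chunks) == 2   # only 2 small chunks, not one huge array
```

A small table with few dead tuples thus allocates only a chunk or two, regardless of how large maintenance_work_mem is set.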
{
"msg_contents": "On 2019-Dec-09, Ibrar Ahmed wrote:\n\n> Hi,\n> \n> The memory consumption of VACUUM has some issues and could be improved.\n> Some of its limitations are recorded in the comments of the “vacuumlazy.c”\n> file. The current design of VACUUM memory usage is that it stores the TID\n> in a fixed-size array which is allocated at the start, based upon\n> maintenance_work_mem. There are three problems with that design\n\nDid you see this thread?\nhttps://postgr.es/m/CAGTBQpbDCaR6vv9=scXzuT8fSbckf=a3NgZdWFWZbdVugVht6Q@mail.gmail.com\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 9 Dec 2019 15:54:28 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM memory management"
},
{
"msg_contents": "On Mon, Dec 9, 2019 at 11:54 PM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2019-Dec-09, Ibrar Ahmed wrote:\n>\n> > Hi,\n> >\n> > The memory consumption of VACUUM has some issues and could be improved.\n> > Some of its limitations are recorded in the comments of the\n> “vacuumlazy.c”\n> > file. The current design of VACUUM memory usage is that it stores the TID\n> > in a fixed-size array which is allocated at the start, based upon\n> > maintenance_work_mem. There are three problems with that design\n>\n> Did you see this thread?\n>\n> https://postgr.es/m/CAGTBQpbDCaR6vv9=scXzuT8fSbckf=a3NgZdWFWZbdVugVht6Q@mail.gmail.com\n>\n> Yes, and somehow did what is explained.\n\nRobert: \"What I think we need to do is make some provision to initially\nallocate only a small amount of memory and then grow the allocation\nlater if needed. For example, instead of having\nvacrelstats->dead_tuples be declared as ItemPointer, declare it as\nItemPointer * and allocate the array progressively in segments. I'd\nactually argue that the segment size should be substantially smaller\nthan 1 GB, like say 64MB; there are still some people running systems\nwhich are small enough that allocating 1 GB when we may need only 6\nbytes can drive the system into OOM.\"\n\nI change vacrelstats->dead_tuples to ItemPointer * and allocate small\nmemory and added more when needed. What I did new is\ndivide maintenance_work_mem in fixed-size chunks.\n\n\n\n\n\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\n\n-- \nIbrar Ahmed\n\nOn Mon, Dec 9, 2019 at 11:54 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:On 2019-Dec-09, Ibrar Ahmed wrote:\n\n> Hi,\n> \n> The memory consumption of VACUUM has some issues and could be improved.\n> Some of its limitations are recorded in the comments of the “vacuumlazy.c”\n> file. 
The current design of VACUUM memory usage is that it stores the TID\n> in a fixed-size array which is allocated at the start, based upon\n> maintenance_work_mem. There are three problems with that design\n\nDid you see this thread?\nhttps://postgr.es/m/CAGTBQpbDCaR6vv9=scXzuT8fSbckf=a3NgZdWFWZbdVugVht6Q@mail.gmail.com\nYes, and somehow did what is explained.Robert: \"What I think we need to do is make some provision to initiallyallocate only a small amount of memory and then grow the allocationlater if needed. For example, instead of havingvacrelstats->dead_tuples be declared as ItemPointer, declare it asItemPointer * and allocate the array progressively in segments. I'dactually argue that the segment size should be substantially smallerthan 1 GB, like say 64MB; there are still some people running systemswhich are small enough that allocating 1 GB when we may need only 6bytes can drive the system into OOM.\"I change vacrelstats->dead_tuples to ItemPointer * and allocate small memory and added more when needed. What I did new is divide maintenance_work_mem in fixed-size chunks. \n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n-- Ibrar Ahmed",
"msg_date": "Tue, 10 Dec 2019 00:02:08 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM memory management"
},
{
"msg_contents": "On Mon, 9 Dec 2019 at 14:03, Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> I'd\n> actually argue that the segment size should be substantially smaller\n> than 1 GB, like say 64MB; there are still some people running systems\n> which are small enough that allocating 1 GB when we may need only 6\n> bytes can drive the system into OOM.\"\n\nI don't even see why you would allocated as much as 64MB. I would\nthink something around 1MB would be more sensible. So you might need\nan array of segment pointers as long as a few thousand pointers, big\ndeal. We can handle repalloc on 8kB arrays pretty easily.\n\n-- \ngreg\n\n\n",
"msg_date": "Mon, 9 Dec 2019 16:36:15 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM memory management"
},
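Greg's point about repalloc on the segment-pointer array can be illustrated with a doubling-growth sketch (all sizes here are illustrative, not PostgreSQL's): even with small ~1 MB segments, the directory of segment pointers stays tiny and needs only a logarithmic number of regrowths.

```python
# Sketch of the point above (illustrative sizes, not PostgreSQL code):
# if the directory of segment pointers is doubled each time it fills,
# the number of repalloc()-style regrowths is O(log n).

def grow_directory(segments_needed: int, initial_slots: int = 8):
    """Return (final_slots, regrowth_count) for a doubling directory."""
    slots, regrowths = initial_slots, 0
    while slots < segments_needed:
        slots *= 2              # repalloc to twice the previous size
        regrowths += 1
    return slots, regrowths

# 1 GB of maintenance_work_mem in 1 MB segments -> 1024 segment pointers:
slots, regrowths = grow_directory(1024)
assert slots == 1024
assert regrowths == 7           # 8 -> 16 -> ... -> 1024: only 7 regrowths
```

With 8-byte pointers, even those 1024 slots occupy only 8 kB, which is the "we can handle repalloc on 8kB arrays pretty easily" case.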
{
"msg_contents": "On Mon, Dec 9, 2019 at 4:37 PM Greg Stark <stark@mit.edu> wrote:\n> On Mon, 9 Dec 2019 at 14:03, Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> > I'd\n> > actually argue that the segment size should be substantially smaller\n> > than 1 GB, like say 64MB; there are still some people running systems\n> > which are small enough that allocating 1 GB when we may need only 6\n> > bytes can drive the system into OOM.\"\n>\n> I don't even see why you would allocated as much as 64MB. I would\n> think something around 1MB would be more sensible. So you might need\n> an array of segment pointers as long as a few thousand pointers, big\n> deal. We can handle repalloc on 8kB arrays pretty easily.\n\nSee https://www.postgresql.org/message-id/9bf3fe70-7aac-cbf7-62f7-acdaa4306ccb%40iki.fi\n\nAnother consideration is that, if we have parallel VACUUM, this all\nneeds to be done using DSM or DSA, neither of which is going to do a\nfantastic job with lots of 1MB allocations. If you allocate 1MB DSMs,\nyou'll run out of DSM slots. If you allocate 1MB chunks from DSA,\nit'll allocate progressively larger DSMs and give you 1MB chunks from\nthem. That's probably OK, but you're just wasting whatever memory from\nthe chunk you don't end up allocating.\n\nI suggested 64MB because I don't think many people these days run out\nof memory because VACUUM overshoots its required memory budget by a\nfew tens of megabytes. The problem is when it overruns by hundreds of\nmegabytes, and people would like large maintenance_work_mem settings\nwhere the overrun might be gigabytes.\n\nPerhaps there are contrary arguments, but I don't think the cost of\nrepalloc() is really the issue here.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 11 Dec 2019 09:28:01 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM memory management"
},
{
"msg_contents": "On Mon, Dec 9, 2019 at 2:02 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>> Did you see this thread?\n>> https://postgr.es/m/CAGTBQpbDCaR6vv9=scXzuT8fSbckf=a3NgZdWFWZbdVugVht6Q@mail.gmail.com\n>>\n> Yes, and somehow did what is explained.\n\nDid you modify Claudio's patch or write a totally new one? In either\ncase, why did you choose that approach? If you wrote a totally new\none, have you compared your work with Claudio's, to see if he covered\nanything you might need to cover? Please explain why your patch is\nbetter/different than his.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 11 Dec 2019 09:29:35 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM memory management"
},
{
"msg_contents": "On Wed, Dec 11, 2019 at 7:29 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Dec 9, 2019 at 2:02 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> >> Did you see this thread?\n> >>\n> https://postgr.es/m/CAGTBQpbDCaR6vv9=scXzuT8fSbckf=a3NgZdWFWZbdVugVht6Q@mail.gmail.com\n> >>\n> > Yes, and somehow did what is explained.\n>\n> Did you modify Claudio's patch or write a totally new one?\n\n\nI wrote completely new patch. I tried multiple techniques like using a list\ninstead of fixed size array which I thought was most suitable here, but\nleave that because of conflict with Parallel Vacuum.\n\n\n> In either case, why did you choose that approach?\n\n\nThis is the simplest technique. I just divided the maintenance_work_mem in\nchunks and allocate chunks as needed. This technique change minimum code\nand do what we want to achieve.\n\n\n> If you wrote a totally new one, have you compared your work with\n> Claudio's, to see if he covered\n> anything you might need to cover?\n\n\nNo, this part I missed, I will do that and will share my thoughts.\n\n\n\n> Please explain why your patch is\n> better/different than his.\n>\n>\n\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\n\n-- \nIbrar Ahmed",
"msg_date": "Wed, 11 Dec 2019 21:29:17 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM memory management"
},
{
"msg_contents": "On Wed, Dec 11, 2019 at 9:29 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n\n>\n>\n> On Wed, Dec 11, 2019 at 7:29 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>> On Mon, Dec 9, 2019 at 2:02 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>> >> Did you see this thread?\n>> >>\n>> https://postgr.es/m/CAGTBQpbDCaR6vv9=scXzuT8fSbckf=a3NgZdWFWZbdVugVht6Q@mail.gmail.com\n>> >>\n>> > Yes, and somehow did what is explained.\n>>\n>> Did you modify Claudio's patch or write a totally new one?\n>\n>\n> I wrote completely new patch. I tried multiple techniques like using a\n> list instead of fixed size array which I thought was most suitable here,\n> but leave that because of conflict with Parallel Vacuum.\n>\n>\n>> In either case, why did you choose that approach?\n>\n>\n> This is the simplest technique. I just divided the maintenance_work_mem in\n> chunks and allocate chunks as needed. This technique change minimum code\n> and do what we want to achieve.\n>\n>\n>> If you wrote a totally new one, have you compared your work with\n>> Claudio's, to see if he covered\n>> anything you might need to cover?\n>\n>\n> No, this part I missed, I will do that and will share my thoughts.\n>\n> I checked the patch, and it does not do anything special which my patch is\nnot doing except one thing. The patch is claiming to increase the limit of\n1GB along with that, but I have not touched that. In my case, we are still\nunder the limit of maintaines_work_mem but allocate memory in chunks. In\nthat case, you have the leverage to set a big value of maintaness_work_mem\n(even if you don't need that) because it will not allocate all the memory\nat the start.\n\nSecondly, the patch refactors the whole area of code which makes this patch\nlarger than expected. The code changes in the patch are almost doubled from\nmy patch. 
By the way, now I took the test cases from the patch and included\nthat into my patch (Credit Claudio)\n\n Please explain why your patch is\n>\n>> better/different than his.\n>>\n>>\n>\n>> --\n>> Robert Haas\n>> EnterpriseDB: http://www.enterprisedb.com\n>> The Enterprise PostgreSQL Company\n>>\n>\n>\n> --\n> Ibrar Ahmed\n>\n\n\n-- \nIbrar Ahmed",
"msg_date": "Fri, 13 Dec 2019 00:13:27 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM memory management"
},
{
"msg_contents": "Hi Ibrar,\r\n\r\nAre you still working on this patch?\r\nCurrently the patch does not apply mainly because of\r\nrecent commits for parallel vacuum have updated the files in this patch.\r\nKindly rebase it and change the status to \"Needs Review\" after.\r\n\r\nUpon quick scan of another thread [1] mentioned above,\r\nI believe the people involved had consensus on the same direction\r\nof allocating mem in chunks, and dynamically alloc when\r\nneeded. A point for discussion was the size of chunk allocation.\r\n\r\nAfter a brief look of your patch, there's a typo between\r\ndeclaration and definition of lazy_vacuum_page():\r\narryindex --> arrindex\r\n\r\nstatic int lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,\r\n- int tupindex, LVRelStats *vacrelstats, Buffer *vmbuffer);\r\n+ int arryindex, int tupindex, LVRelStats *vacrelstats, Buffer *vmbuffer);\r\n\r\nstatic int\r\nlazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,\r\n- int tupindex, LVRelStats *vacrelstats, Buffer *vmbuffer)\r\n+ int arrindex, int tupindex, LVRelStats *vacrelstats, Buffer *vmbuffer)\r\n\r\nUnnecessary change:\r\n- long maxtuples;\r\n- int vac_work_mem = IsAutoVacuumWorkerProcess() &&\r\n+ long maxtuples;\r\n+ int vac_work_mem = IsAutoVacuumWorkerProcess() &&\r\n\r\nOther typo:\r\n+ * pg_bsearch() -- bsearch algorithem for two dimention array.\r\nalgorithem --> algorithm\r\ndimention --> dimension\r\n\r\nI might have missed something more,\r\nbut I'll continue reviewing after the rebased patch.\r\n\r\nRegards,\r\nKirk Jamison\r\n\r\n[1] https://www.postgresql.org/message-id/flat/CAGTBQpbDCaR6vv9%3DscXzuT8fSbckf%3Da3NgZdWFWZbdVugVht6Q%40mail.gmail.com\r\n",
"msg_date": "Wed, 22 Jan 2020 06:17:33 +0000",
"msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: VACUUM memory management"
},
{
"msg_contents": "On Wed, Jan 22, 2020 at 11:17 AM k.jamison@fujitsu.com <\nk.jamison@fujitsu.com> wrote:\n\n> Hi Ibrar,\n>\n>\n>\n> Are you still working on this patch?\n>\n> Currently the patch does not apply mainly because of\n>\n> recent commits for parallel vacuum have updated the files in this patch.\n>\n> Kindly rebase it and change the status to \"Needs Review\" after.\n>\n>\n>\n> Upon quick scan of another thread [1] mentioned above,\n>\n> I believe the people involved had consensus on the same direction\n>\n> of allocating mem in chunks, and dynamically alloc when\n>\n> needed. A point for discussion was the size of chunk allocation.\n>\n>\n>\n> After a brief look of your patch, there's a typo between\n>\n> declaration and definition of lazy_vacuum_page():\n>\n> arryindex --> arrindex\n>\n>\n>\n> static int lazy_vacuum_page(Relation onerel, BlockNumber blkno,\n> Buffer buffer,\n>\n> -\n> int tupindex, LVRelStats *vacrelstats, Buffer *vmbuffer);\n>\n> +\n> int arryindex, int tupindex, LVRelStats *vacrelstats, Buffer *vmbuffer);\n>\n>\n>\n> static int\n>\n> lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,\n>\n> - int tupindex, LVRelStats\n> *vacrelstats, Buffer *vmbuffer)\n>\n> + int arrindex, int tupindex,\n> LVRelStats *vacrelstats, Buffer *vmbuffer)\n>\n>\n>\n> Unnecessary change:\n>\n> - long maxtuples;\n>\n> - int vac_work_mem =\n> IsAutoVacuumWorkerProcess() &&\n>\n> + long maxtuples;\n>\n> + int vac_work_mem = IsAutoVacuumWorkerProcess() &&\n>\n>\n>\n> Other typo:\n>\n> + * pg_bsearch() -- bsearch algorithem for two dimention array.\n>\n> algorithem --> algorithm\n>\n> dimention --> dimension\n>\n>\n>\n> I might have missed something more,\n>\n> but I'll continue reviewing after the rebased patch.\n>\n>\n>\n> Regards,\n>\n> Kirk Jamison\n>\n>\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/CAGTBQpbDCaR6vv9%3DscXzuT8fSbckf%3Da3NgZdWFWZbdVugVht6Q%40mail.gmail.com\n>\nHi,\nYes, I am working on that. I will send the rebased and updated patch.\n\n\n-- \nIbrar Ahmed",
"msg_date": "Tue, 28 Jan 2020 23:36:24 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM memory management"
},
{
"msg_contents": "On 1/28/20 1:36 PM, Ibrar Ahmed wrote:\n> On Wed, Jan 22, 2020 at 11:17 AM k.jamison@fujitsu.com \n> I might have missed something more,____\n> \n> but I'll continue reviewing after the rebased patch.____\n> \n> Yes, I am working on that. I will send the rebased and updated patch.\n\nThis patch has not had any updates in months and now we are halfway \nthrough the CF so I have marked it Returned with Feedback.\n\nIf a patch arrives soon I'll be happy to revive the entry, otherwise \nplease submit to a future CF when a new patch is available.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Mon, 16 Mar 2020 09:34:57 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM memory management"
},
{
"msg_contents": "On Mon, Mar 16, 2020 at 6:35 PM David Steele <david@pgmasters.net> wrote:\n\n> On 1/28/20 1:36 PM, Ibrar Ahmed wrote:\n> > On Wed, Jan 22, 2020 at 11:17 AM k.jamison@fujitsu.com\n> > I might have missed something more,____\n> >\n> > but I'll continue reviewing after the rebased patch.____\n> >\n> > Yes, I am working on that. I will send the rebased and updated patch.\n>\n> This patch has not had any updates in months and now we are halfway\n> through the CF so I have marked it Returned with Feedback.\n>\n> If a patch arrives soon I'll be happy to revive the entry, otherwise\n> please submit to a future CF when a new patch is available.\n>\n> Regards,\n> --\n> -David\n> david@pgmasters.net\n>\n\nHere is the latest patch rebased with master\n(19db23bcbda99e93321cb0636677ec9c6e121a2a) Fri Apr 3 12:20:42 2020. Patch\nfix all the issues, after the parallel vacuum patch. The patch works in\ncase of a non-parallel option and allocates memory in chunks.\n\n-- \nIbrar Ahmed",
"msg_date": "Fri, 3 Apr 2020 21:04:34 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM memory management"
},
{
"msg_contents": "On Fri, Apr 03, 2020 at 09:04:34PM +0500, Ibrar Ahmed wrote:\n> Here is the latest patch rebased with master\n> (19db23bcbda99e93321cb0636677ec9c6e121a2a) Fri Apr 3 12:20:42 2020. Patch\n> fix all the issues, after the parallel vacuum patch. The patch works in\n> case of a non-parallel option and allocates memory in chunks.\n\nThis patch seems to break vacuuming. On unpatched, master it scans the index:\n\n|postgres=# DROP TABLE t; CREATE UNLOGGED TABLE t AS SELECT generate_series(1,499999)a; CREATE INDEX ON t(a); SET maintenance_work_mem='1024kB'; UPDATE t SET a=1+a; VACUUM VERBOSE t; \\dt+ t \\\\ UPDATE t SET a=1+a; VACUUM VERBOSE t; \\dt+ t \\\\ UPDATE t SET a=1+a; VACUUM VERBOSE t; \\dt+ t \\\\ UPDATE t SET a=1+a; VACUUM VERBOSE t; \\dt+ t\n|...\n|INFO: vacuuming \"public.t\"\n|INFO: scanned index \"t_a_idx\" to remove 174472 row versions\n|...\n|INFO: index \"t_a_idx\" now contains 499999 row versions in 4119 pages\n|DETAIL: 499999 index row versions were removed.\n|...\n|INFO: \"t\": found 499999 removable, 499999 nonremovable row versions in 4425 out of 4425 pages\n\nWith this patch, if chunks are in use, it doesn't scan the indexes. 
Also, the\ntable is continuously growing, which means the heap vacuum is broken, too:\n public | t | table | pryzbyj | unlogged | 35 MB |\n public | t | table | pryzbyj | unlogged | 47 MB |\n public | t | table | pryzbyj | unlogged | 59 MB |\n public | t | table | pryzbyj | unlogged | 73 MB |\n\nIf chunks *aren't* in use (note smaller table), it looks like at least the\ndisplayed output is wrong for \"row versions\":\n|template1=# DROP TABLE t; CREATE UNLOGGED TABLE t AS SELECT generate_series(1,199999)a; CREATE INDEX ON t(a); SET maintenance_work_mem='1024kB'; UPDATE t SET a=1+a; VACUUM VERBOSE t; \\dt+ t \\\\ UPDATE t SET a=1+a; VACUUM VERBOSE t; \\dt+ t \\\\ UPDATE t SET a=1+a; VACUUM VERBOSE t; \\dt+ t \\\\ UPDATE t SET a=1+a; VACUUM VERBOSE t; \\dt+ t\n|...\n|UPDATE 199999\n|INFO: vacuuming \"public.t\"\n|INFO: index \"t_a_idx\" now contains 0 row versions in 1099 pages\n|DETAIL: 0 index row versions were removed.\n\nThere's some warnings:\nvacuumlazy.c:2882:1: warning: ‘lazy_space_dealloc’ defined but not used [-Wunused-function]\nvacuumlazy.c:1883:5: warning: ‘tupindex’ may be used uninitialized in this function [-Wmaybe-uninitialized]\n\nWhen you call lazy_vacuum_all_indexes() and then lazy_vacuum_heap(), you set\nvacrelstats->num_chunks = 0; but not vacrelstats->num_chunks = 0;. It seems to\nme, that means that only the *first* iteration of lazy_vacuum_* benefits from\nthe chunks, and the 2nd and later iterations not only don't benefit, but impose the cost\nof an index scan for each chunk, all but the last of which have nothing to do.\n\nThe patch is rebased on top of parallel vacuum implementation, but doesn't\nallow parallel vacuum to use \"chunks\", right ? I think it's important to\nhandle parallel vacuum somehow, which is the default for manual (but not auto)\nvacuum.\n\nI think max_tuples should be a property of LVRelStats rather than LVDeadTuples,\nsince it doesn't vary by chunk. 
Also, there's a weird thing which seems to be\nfor initializing new chunks, but happens even if you didn't add a chunk, and\nthen ends up setting the variable to itself:\n|int maxtuples = dead_tuples[num_chunks]->max_tuples;\n|...\n|dead_tuples[num_chunks]->max_tuples = maxtuples;\n\nMaybe num_tuples should be in LVRelStats, too, indicating the number of tuples\nfor dead_tuples[num_chunks]->itemptrs. Looks like that's how it was before\nparallel vacuum. I think now those would need to be in LVShared, too.\nThen, LVDeadTuples is nothing but a pointer. Maybe that would simplify\nsupporting this for parallel vacuum.\n\nAlso, num_tuples is a bad name, since it's also a local variable in\nlazy_scan_heap():\n| double num_tuples, /* total number of nonremovable tuples */\n\nThe patch changes to iterate N times over the indexes in lazy_vacuum_index.\nThat can be pretty expensive. If there are many dead tuples, this patch starts\nwith a small memory allocation and dynamically increases, but at the cost of\ndoing a multiple as much I/O. It's seems to be a bad tradeoff: if there's 10\nchunks, the first index scan will only handle 10% of what's needed.\n\nI wonder if it's possible to make bsearch() handle the list of lists, to allow\ndoing a single index scan per iteration, rather than num_batches. I don't\nthink it accesses the pointers themselves, but rather just calls the callback.\nYou could make the callback find offset1 and offset2, and compute which chunk\neach is in, and the offset within the chunk, and then do the comparison. Maybe\nthat's too clever and we should just include our own bsearch().\n\nvacuum only runs in parallel for index vacuum and cleanup (but not heap scan or\nheap vacuum). Right now, dead_tuples[0] is allocated in\nbegin_parallel_vacuum(), which is called at the beginning of lazy_scan_heap().\nI guess it's not possible to dynamically resize that, but is there any reason\nyou can't destroy it and recreate it as needed during heap scan? 
I guess one\nreason is that we want to avoid: 1) allocating a new DSM segment of size 2*N,\nin addition to the existing one of size N, then copy the original allocation to\nthe new allocation, then destroy the original. That means we have max memory\nuse of 3*N, not just 2*N :( Maybe overcommit/lazy allocation by the OS means\nthat's not always true... One way to do it would if if you gather N dead\ntuples, then trigger an index/heap vacuum, then destroy the dead_tuples and\nallocate a new one of twice the size (but no need to copy the old one). That\nstill incurs the cost of multiple (additional) index scans during the early\niterations when you have a small allocation, which isn't great.\n\nMarking this patch as RWF.\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 4 Apr 2020 19:22:43 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM memory management"
},
{
"msg_contents": "On Wed, Dec 11, 2019 at 09:29:17PM +0500, Ibrar Ahmed wrote:\n> > Did you modify Claudio's patch or write a totally new one?\n> \n> I wrote completely new patch. I tried multiple techniques like using a list\n> instead of fixed size array which I thought was most suitable here, but\n> leave that because of conflict with Parallel Vacuum.\n\nUsing a list will hardly work, or certainly not well, since it needs to be\nsearched by the ambulkdelete callback.\n\n> >> If you wrote a totally new one, have you compared your work with\n> >> Claudio's, to see if he covered\n> >> anything you might need to cover?\n> >\n> > I checked the patch, and it does not do anything special which my patch is\n> not doing except one thing. The patch is claiming to increase the limit of\n> 1GB along with that, but I have not touched that. In my case, we are still\n> under the limit of maintaines_work_mem but allocate memory in chunks. In\n> that case, you have the leverage to set a big value of maintaness_work_mem\n> (even if you don't need that) because it will not allocate all the memory\n> at the start.\n\nAfter spending a bunch of time comparing them, I disagree. Claudio's patch\ndoes these:\n\n - avoid using multiple chunks if there's no indexes, therefore no need to\n avoid the high cost of index scans to avoid;\n - rather than doing an index scan for each chunk (bad), the callback function\n lazy_tid_reaped() does a custom binary search *over* chunks of different\n sizes and then *within* each chunk. 
That's maybe slighly over-engineered,\n I'm not convinced that's needed (but I thought it was pretty clever), but\n someone thought that was important.\n - properly keep track of *total* number of dead tuples, eg for progress\n reporting, and for prev_dead_count for pages with no dead tuples;\n - lazy_record_dead_tuple() doubles allocation when running out of space for\n dead tuples; some people disagree with that (myself included) but I'm\n including it here since that's what it does. This still seems nontrivial\n (to me) to adapt to work with parallel query.\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 5 Apr 2020 15:26:25 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM memory management"
}
] |
[
{
"msg_contents": "Hi,\n\ntoday I observed (on a r5.24xlarge AWS RDS instance, i.e. 96 logical\ncores) lock contention on a buffer content lock due to taking of a\nSHARED lock (I think):\n\nThree tables were involved, simplified case:\n\nCREATE TABLE global_config (id BIGINT PRIMARY KEY);\n\nCREATE TABLE b (\n id BIGINT PRIMARY KEY,\n config_id BIGINT REFERENCES global_config (id)\n);\n\nCREATE TABLE c (\n id BIGINT PRIMARY KEY,\n config_id BIGINT REFERENCES global_config (id)\n);\n\n(I suppose having both b + c doesn't make a difference, but\nmaybe it was relevant, so I'm including it.)\n\nHeavy INSERT + UPDATE traffic on b + c (all trivial via id),\nSELECTs on global_config (again by id).\nAs the name suggests, there were only very few rows in\nglobal_config, specifically only one was referenced by all\nINSERT + UPDATEs on b + c.\n\nOn lighter load, all three types of queries were taking <1ms (as\nexpected), as load grew, all three went to ~50ms avg. execution time\ntogether. AWS RDS console showed wait on LWLock:buffer_content as the\nmain contribution to that time.\n\nChecking the code, I concluded that I observed lock contention\non the lock taken at the beginning of heap_lock_tuple, where\nan exclusive buffer content lock is held while recording the\nSHARE lock into the tuple and the WAL and the multiXact. I don't know\nthe actual number, but potentially up to 7000 active\ntransactions were holding a SHARE lock on that row, which could have\nperformance implications while scanning for multiXact memberships.\n\n\nSemantically, all that lock traffic was superfluous, as the\nglobal_config row's key was in no danger of being changed.\n\nAs this situation (some global, essentially static, entity is referenced\nby a much written table) seems not uncommon, I wonder:\n\n1. Does the above analysis sound about right?\n\n2. 
If so, would it be worthwhile to develop a solution?\n I was thinking along the lines of introducing an multiXact\n representation of \"everyone\": Instead of meticulously recording every\n locking + completing transaciton in a multiXact, after a certain\n number of transactions has accumulated in a single multiXact, it is\n approximated as \"everyone\". If later a transaction finds that a SHARE\n lock is held by \"everyone\", the tuple would need no further modification\n (not sure if this could even be checked without taking an exclusive\n buffer lock). The hard part would probably be to ensure that an\n attempt to obtain an EXCLUSIVE lock would finally succeed against a\n SHARE lock held by \"everyone\".\n\n\nRegards,\n Drahflow",
"msg_date": "Mon, 9 Dec 2019 23:10:36 +0100",
"msg_from": "Jens-Wolfhard Schicke-Uffmann <drahflow@gmx.de>",
"msg_from_op": true,
"msg_subject": "Contention on LWLock buffer_content, due to SHARED lock(?)"
},
{
"msg_contents": "On Mon, Dec 9, 2019 at 5:10 PM Jens-Wolfhard Schicke-Uffmann <\ndrahflow@gmx.de> wrote:\n\n> Hi,\n>\n> today I observed (on a r5.24xlarge AWS RDS instance, i.e. 96 logical\n> cores) lock contention on a buffer content lock due to taking of a\n> SHARED lock (I think):\n>\n\nWhat version of PostgreSQL are you using?\n\nCheers,\n\nJeff",
"msg_date": "Tue, 10 Dec 2019 11:28:50 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Contention on LWLock buffer_content, due to SHARED lock(?)"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-09 23:10:36 +0100, Jens-Wolfhard Schicke-Uffmann wrote:\n> today I observed (on a r5.24xlarge AWS RDS instance, i.e. 96 logical\n> cores) lock contention on a buffer content lock due to taking of a\n> SHARED lock (I think):\n\n> Three tables were involved, simplified case:\n>\n> CREATE TABLE global_config (id BIGINT PRIMARY KEY);\n>\n> CREATE TABLE b (\n> id BIGINT PRIMARY KEY,\n> config_id BIGINT REFERENCES global_config (id)\n> );\n>\n> CREATE TABLE c (\n> id BIGINT PRIMARY KEY,\n> config_id BIGINT REFERENCES global_config (id)\n> );\n>\n> (I suppose having both b + c doesn't make a difference, but\n> maybe it was relevant, so I'm including it.)\n>\n> Heavy INSERT + UPDATE traffic on b + c (all trivial via id),\n> SELECTs on global_config (again by id).\n> As the name suggests, there were only very few rows in\n> global_config, specifically only one was referenced by all\n> INSERT + UPDATEs on b + c.\n>\n> On lighter load, all three types of queries were taking <1ms (as\n> expected), as load grew, all three went to ~50ms avg. execution time\n> together. AWS RDS console showed wait on LWLock:buffer_content as the\n> main contribution to that time.\n>\n> Checking the code, I concluded that I observed lock contention\n> on the lock taken at the beginning of heap_lock_tuple, where\n> an exclusive buffer content lock is held while recording the\n> SHARE lock into the tuple and the WAL and the multiXact. 
I don't know\n> the actual number, but potentially up to 7000 active\n> transactions were holding a SHARE lock on that row, which could have\n> performance implications while scanning for multiXact memberships.\n\nWhen you say \"7000 active transactions\" - do you mean to say that you\nhave set max_connections to something higher than that, and you actually\nhave that many concurrent transactions?\n\n\n> Semantically, all that lock traffic was superfluous, as the\n> global_config row's key was in no danger of being changed.\n\nWell, postgres can't know that.\n\n\n> As this situation (some global, essentially static, entity is referenced\n> by a much written table) seems not uncommon, I wonder:\n>\n> 1. Does the above analysis sound about right?\n\nHard to know without additional data.\n\n\n> 2. If so, would it be worthwhile to develop a solution?\n\nPossible, but I'm not sure it's worth the complexity.\n\nI'd definitely like to see a proper reproducer and profile for this,\nbefore investigating further.\n\n\n> I was thinking along the lines of introducing an multiXact\n> representation of \"everyone\": Instead of meticulously recording every\n> locking + completing transaciton in a multiXact, after a certain\n> number of transactions has accumulated in a single multiXact, it is\n> approximated as \"everyone\". If later a transaction finds that a SHARE\n> lock is held by \"everyone\", the tuple would need no further modification\n\nI think the big problem with a strategy like this is that it's prone to\ngenerate deadlocks that aren't present in the \"original\" scheduling.\n\n\n> (not sure if this could even be checked without taking an exclusive\n> buffer lock).\n\nIt should only require a share lock.\n\n\n> The hard part would probably be to ensure that an\n> attempt to obtain an EXCLUSIVE lock would finally succeed against a\n> SHARE lock held by \"everyone\".\n\nNote that this is a seriously complicated area of the code. 
It's very\neasy to create new bugs that aren't easily testable. I think we'd need a\nvery convincing use-case for improvements around the problem you outline\nand relatively simple solution, to counter stability concerns.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 10 Dec 2019 08:44:17 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Contention on LWLock buffer_content, due to SHARED lock(?)"
},
{
"msg_contents": "On 2019-Dec-10, Andres Freund wrote:\n\n> > The hard part would probably be to ensure that an\n> > attempt to obtain an EXCLUSIVE lock would finally succeed against a\n> > SHARE lock held by \"everyone\".\n> \n> Note that this is a seriously complicated area of the code. It's very\n> easy to create new bugs that aren't easily testable. I think we'd need a\n> very convincing use-case for improvements around the problem you outline\n> and relatively simple solution, to counter stability concerns.\n\nI'd rather have the ability to mark a table READ ONLY (or similar).\nThen any FK references can skip the row locks altogether. For the rare\ncases where you need to modify the referenced table, have it marked READ\nWRITE, and any row locks are registered normally from that point on,\nuntil you set it back to READ ONLY again.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 10 Dec 2019 15:07:05 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Contention on LWLock buffer_content, due to SHARED lock(?)"
},
{
"msg_contents": "Hi,\n\nOn Tue, Dec 10, 2019 at 08:44:17AM -0800, Andres Freund wrote:\n> > today I observed (on a r5.24xlarge AWS RDS instance, i.e. 96 logical\n> > cores) lock contention on a buffer content lock due to taking of a\n> > SHARED lock (I think):\n> When you say \"7000 active transactions\" - do you mean to say that you\n> have set max_connections to something higher than that, and you actually\n> have that many concurrent transactions?\nYes, max connections was 20000, active connections around 7000 at that\ntime. Unfortunately, I don't have actual numbers of connections in\ntransactions for that point in time. (We were trying to establish\nmaximum performance of a larger system.)\n\n> > Semantically, all that lock traffic was superfluous, as the\n> > global_config row's key was in no danger of being changed.\n> Well, postgres can't know that.\nI am aware; it's just an argument for why it might be possible to\nshove some optimization there.\n\n> > 1. Does the above analysis sound about right?\n> Hard to know without additional data.\nWhat data would be worth recording next time? (Except number of\nactive transactions, obviously.)\n\n\n> > 2. If so, would it be worthwhile to develop a solution?\n> Possible, but I'm not sure it's worth the complexity.\n>\n> I'd definitely like to see a proper reproducer and profile for this,\n> before investigating further.\nI'll see if and when I can include this into my client's project\nschedule. Might be a while, but I'll get back to you when I have\na reproducer + profile data (of an up-to-date vanilla Postgres,\nnot 10.7+AWS aurora patches).\n\n\n> I think we'd need a very convincing use-case for improvements around the problem\n> you outline.\nUnderstood. I'll try to get an iron-clad profile of the problematic case\nfirst.\n\n\nRegards,\n Drahflow",
"msg_date": "Tue, 10 Dec 2019 22:44:17 +0100",
"msg_from": "Jens-Wolfhard Schicke-Uffmann <drahflow@gmx.de>",
"msg_from_op": true,
"msg_subject": "Re: Contention on LWLock buffer_content, due to SHARED lock(?)"
},
{
"msg_contents": "Hi,\n\nOn Tue, Dec 10, 2019 at 03:07:05PM -0300, Alvaro Herrera wrote:\n> I'd rather have the ability to mark a table READ ONLY (or similar).\n> Then any FK references can skip the row locks altogether. For the rare\n> cases where you need to modify the referenced table, have it marked READ\n> WRITE, and any row locks are registered normally from that point on,\n> until you set it back to READ ONLY again.\nHowever, that would require changes to applications writing to the table\nand a good understanding of performance characteristics by everyone\ntrying to get to that scale. (OTOH, there is certainly an argument to be\nmade that whoever hits this kind of problem better also has an idea of\npostgres performance tuning anyway.)\n\nMore troubling (to me) is that I already know of another table in the\nsystem which should be next-in-line for the same problem, but only on\nsome rows: It represents accounting entities, of which a very (nearly\nstatic) few are payment processors and all others are customers. From\nthe application's perspective there's not too much difference between\nthose, but any customer row will typically only be share locked once,\nwhereas share locks on payment processor rows will be held by most of\nthe transactions currently active.\n\nThat use-case is not very uncommon I think, so it migth be worthwhile\nto implement a solution which does not require all rows of a table to\nshare similar lock contention characteristics, or writability.\n\n\nRegards,\n Drahflow",
"msg_date": "Tue, 10 Dec 2019 23:08:34 +0100",
"msg_from": "Jens-Wolfhard Schicke-Uffmann <drahflow@gmx.de>",
"msg_from_op": true,
"msg_subject": "Re: Contention on LWLock buffer_content, due to SHARED lock(?)"
},
{
"msg_contents": "On 2019-Dec-10, Jens-Wolfhard Schicke-Uffmann wrote:\n\n> More troubling (to me) is that I already know of another table in the\n> system which should be next-in-line for the same problem, but only on\n> some rows: It represents accounting entities, of which a very (nearly\n> static) few are payment processors and all others are customers. From\n> the application's perspective there's not too much difference between\n> those, but any customer row will typically only be share locked once,\n> whereas share locks on payment processor rows will be held by most of\n> the transactions currently active.\n\nWell, you could partition that table. This probably means you'll need\nto improve Postgres implementation of PKs on partitioned tables, though.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 10 Dec 2019 19:12:52 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Contention on LWLock buffer_content, due to SHARED lock(?)"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-10 22:44:17 +0100, Jens-Wolfhard Schicke-Uffmann wrote:\n> On Tue, Dec 10, 2019 at 08:44:17AM -0800, Andres Freund wrote:\n> > > today I observed (on a r5.24xlarge AWS RDS instance, i.e. 96 logical\n> > > cores) lock contention on a buffer content lock due to taking of a\n> > > SHARED lock (I think):\n> > When you say \"7000 active transactions\" - do you mean to say that you\n> > have set max_connections to something higher than that, and you actually\n> > have that many concurrent transactions?\n> Yes, max connections was 20000, active connections around 7000 at that\n> time. Unfortunately, I don't have actual numbers of connections in\n> transactions for that point in time. (We were trying to establish\n> maximum performance of a larger system.)\n\nI'd strongly recommend changing your architecture. There's *severe*\noverhead in that many concurrent active connections (and some in such a\nhigh max_connections setting). It's likely that you'd be much much\nbetter off by putting in a query pooler in front that limits active\ntransaction to a significantly smaller number. There's only so many CPU\ncores, so at some point adding more concurrency just increases the\noverall amount of work that needs to be done (due to the overhead of\nmanaging concurrency and context switches).\n\n\n> > > 1. Does the above analysis sound about right?\n> > Hard to know without additional data.\n> What data would be worth recording next time? (Except number of\n> active transactions, obviously.)\n\nI think we'd need a CPU profile for starters. But that unfortunately\nwon't be possible on RDS...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 10 Dec 2019 14:23:03 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Contention on LWLock buffer_content, due to SHARED lock(?)"
}
] |
[
{
"msg_contents": "Hi,\n\nI think there's a minor bug in pg_stat_activity tracking of walsender\nprocesses. The issue is that xact_start is only updated at the very\nbeginning when the walsender starts (so it's almost exactly equal to\nbackend_start) and then just flips between NULL and that value.\n\nReproducing this is trivial - just create a publication/subscription\nwith the built-in logical replication, and run arbitrary workload.\nYou'll see that the xact_start value never changes.\n\nI think the right fix is calling SetCurrentStatementStartTimestamp()\nright before StartTransactionCommand() in ReorderBufferCommit, per the\nattached patch.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 10 Dec 2019 00:44:09 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "xact_start for walsender & logical decoding not updated"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-10 00:44:09 +0100, Tomas Vondra wrote:\n> I think there's a minor bug in pg_stat_activity tracking of walsender\n> processes. The issue is that xact_start is only updated at the very\n> beginning when the walsender starts (so it's almost exactly equal to\n> backend_start) and then just flips between NULL and that value.\n> \n> Reproducing this is trivial - just create a publication/subscription\n> with the built-in logical replication, and run arbitrary workload.\n> You'll see that the xact_start value never changes.\n> \n> I think the right fix is calling SetCurrentStatementStartTimestamp()\n> right before StartTransactionCommand() in ReorderBufferCommit, per the\n> attached patch.\n\n> -- \n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n> diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c\n> index 53affeb877..5235fb31b8 100644\n> --- a/src/backend/replication/logical/reorderbuffer.c\n> +++ b/src/backend/replication/logical/reorderbuffer.c\n> @@ -1554,7 +1554,10 @@ ReorderBufferCommit(ReorderBuffer *rb, TransactionId xid,\n> \t\tif (using_subtxn)\n> \t\t\tBeginInternalSubTransaction(\"replay\");\n> \t\telse\n> +\t\t{\n> +\t\t\tSetCurrentStatementStartTimestamp();\n> \t\t\tStartTransactionCommand();\n> +\t\t}\n\nI'm quite doubtful this is useful. To me this seems to do nothing but\nadd the overhead of timestamp computation - which isn't always that\ncheap. I don't think you really can draw meaning from this?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 9 Dec 2019 16:04:40 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: xact_start for walsender & logical decoding not updated"
},
{
"msg_contents": "At Tue, 10 Dec 2019 00:44:09 +0100, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote in \n> Hi,\n> \n> I think there's a minor bug in pg_stat_activity tracking of walsender\n> processes. The issue is that xact_start is only updated at the very\n> beginning when the walsender starts (so it's almost exactly equal to\n> backend_start) and then just flips between NULL and that value.\n> \n> Reproducing this is trivial - just create a publication/subscription\n> with the built-in logical replication, and run arbitrary workload.\n> You'll see that the xact_start value never changes.\n> \n> I think the right fix is calling SetCurrentStatementStartTimestamp()\n> right before StartTransactionCommand() in ReorderBufferCommit, per the\n> attached patch.\n\nI'm not sure how much xact_start for walsender is useful and we really\nis not running a statement there. Also autovac launcher starts\ntransaction without a valid statement timestamp perhaps for the same\nreason.\n\nHowever, if we want to show something meaningful there, I think\ncommit_time might be more informative there. If we use\nGetCurrentTimestamp(), StartTransaction() already has the same feature\nfor autonomous transactions. I suppose we should do them a unified\nway.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 10 Dec 2019 09:42:17 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: xact_start for walsender & logical decoding not updated"
},
{
"msg_contents": "On Mon, Dec 09, 2019 at 04:04:40PM -0800, Andres Freund wrote:\n>Hi,\n>\n>On 2019-12-10 00:44:09 +0100, Tomas Vondra wrote:\n>> I think there's a minor bug in pg_stat_activity tracking of walsender\n>> processes. The issue is that xact_start is only updated at the very\n>> beginning when the walsender starts (so it's almost exactly equal to\n>> backend_start) and then just flips between NULL and that value.\n>>\n>> Reproducing this is trivial - just create a publication/subscription\n>> with the built-in logical replication, and run arbitrary workload.\n>> You'll see that the xact_start value never changes.\n>>\n>> I think the right fix is calling SetCurrentStatementStartTimestamp()\n>> right before StartTransactionCommand() in ReorderBufferCommit, per the\n>> attached patch.\n>\n>> --\n>> Tomas Vondra http://www.2ndQuadrant.com\n>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>> diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c\n>> index 53affeb877..5235fb31b8 100644\n>> --- a/src/backend/replication/logical/reorderbuffer.c\n>> +++ b/src/backend/replication/logical/reorderbuffer.c\n>> @@ -1554,7 +1554,10 @@ ReorderBufferCommit(ReorderBuffer *rb, TransactionId xid,\n>> \t\tif (using_subtxn)\n>> \t\t\tBeginInternalSubTransaction(\"replay\");\n>> \t\telse\n>> +\t\t{\n>> +\t\t\tSetCurrentStatementStartTimestamp();\n>> \t\t\tStartTransactionCommand();\n>> +\t\t}\n>\n>I'm quite doubtful this is useful. To me this seems to do nothing but\n>add the overhead of timestamp computation - which isn't always that\n>cheap. I don't think you really can draw meaning from this?\n>\n\nI don't want to use this timestamp directly, but it does interfere with\nmonitoring of long-running transactiosn looking at pg_stat_activity.\nWith the current behavior, the walsender entries have ancient timestamps\nand produce random blips in monitoring. 
Of course, it's possible to edit\nthe queries to skip entries with backend_type = walsender, but that's a\nbit inconvenient.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 10 Dec 2019 12:56:56 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: xact_start for walsender & logical decoding not updated"
},
{
"msg_contents": "On Tue, Dec 10, 2019 at 09:42:17AM +0900, Kyotaro Horiguchi wrote:\n>At Tue, 10 Dec 2019 00:44:09 +0100, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote in\n>> Hi,\n>>\n>> I think there's a minor bug in pg_stat_activity tracking of walsender\n>> processes. The issue is that xact_start is only updated at the very\n>> beginning when the walsender starts (so it's almost exactly equal to\n>> backend_start) and then just flips between NULL and that value.\n>>\n>> Reproducing this is trivial - just create a publication/subscription\n>> with the built-in logical replication, and run arbitrary workload.\n>> You'll see that the xact_start value never changes.\n>>\n>> I think the right fix is calling SetCurrentStatementStartTimestamp()\n>> right before StartTransactionCommand() in ReorderBufferCommit, per the\n>> attached patch.\n>\n>I'm not sure how much xact_start for walsender is useful and we really\n>is not running a statement there. Also autovac launcher starts\n>transaction without a valid statement timestamp perhaps for the same\n>reason.\n>\n\nMaybe, but then maybe we should change it so that we don't report any\ntimestamps for such processes.\n\n>However, if we want to show something meaningful there, I think\n>commit_time might be more informative there. If we use\n>GetCurrentTimestamp(), StartTransaction() already has the same feature\n>for autonomous transactions. I suppose we should do them a unified\n>way.\n>\n\nI don't think so. We have this information from the apply side, and this\nis really about the *new* transaction started in reorderbuffer.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 10 Dec 2019 12:59:03 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: xact_start for walsender & logical decoding not updated"
},
{
"msg_contents": "On 2019-Dec-10, Tomas Vondra wrote:\n\n> On Tue, Dec 10, 2019 at 09:42:17AM +0900, Kyotaro Horiguchi wrote:\n> > At Tue, 10 Dec 2019 00:44:09 +0100, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote in\n\n> > I'm not sure how much xact_start for walsender is useful and we really\n> > is not running a statement there. Also autovac launcher starts\n> > transaction without a valid statement timestamp perhaps for the same\n> > reason.\n> \n> Maybe, but then maybe we should change it so that we don't report any\n> timestamps for such processes.\n\nYeah, I think we should to that.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 10 Dec 2019 15:08:21 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: xact_start for walsender & logical decoding not updated"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-10 12:56:56 +0100, Tomas Vondra wrote:\n> On Mon, Dec 09, 2019 at 04:04:40PM -0800, Andres Freund wrote:\n> > On 2019-12-10 00:44:09 +0100, Tomas Vondra wrote:\n> > > I think there's a minor bug in pg_stat_activity tracking of walsender\n> > > processes. The issue is that xact_start is only updated at the very\n> > > beginning when the walsender starts (so it's almost exactly equal to\n> > > backend_start) and then just flips between NULL and that value.\n> > > \n> > > Reproducing this is trivial - just create a publication/subscription\n> > > with the built-in logical replication, and run arbitrary workload.\n> > > You'll see that the xact_start value never changes.\n> > > \n> > > I think the right fix is calling SetCurrentStatementStartTimestamp()\n> > > right before StartTransactionCommand() in ReorderBufferCommit, per the\n> > > attached patch.\n\n> > \n> > > diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c\n> > > index 53affeb877..5235fb31b8 100644\n> > > --- a/src/backend/replication/logical/reorderbuffer.c\n> > > +++ b/src/backend/replication/logical/reorderbuffer.c\n> > > @@ -1554,7 +1554,10 @@ ReorderBufferCommit(ReorderBuffer *rb, TransactionId xid,\n> > > \t\tif (using_subtxn)\n> > > \t\t\tBeginInternalSubTransaction(\"replay\");\n> > > \t\telse\n> > > +\t\t{\n> > > +\t\t\tSetCurrentStatementStartTimestamp();\n> > > \t\t\tStartTransactionCommand();\n> > > +\t\t}\n> > \n> > I'm quite doubtful this is useful. To me this seems to do nothing but\n> > add the overhead of timestamp computation - which isn't always that\n> > cheap. I don't think you really can draw meaning from this?\n> > \n> \n> I don't want to use this timestamp directly, but it does interfere with\n> monitoring of long-running transactiosn looking at pg_stat_activity.\n> With the current behavior, the walsender entries have ancient timestamps\n> and produce random blips in monitoring. 
Of course, it's possible to edit\n> the queries to skip entries with backend_type = walsender, but that's a\n> bit inconvenient.\n\nOh, I'm not suggesting that we shouldn't fix this somehow, just that I'm\ndoubtful that that adding a lot of additional\nSetCurrentStatementStartTimestamp() calls is the right thing. Besides\nthe overhead, it'd also just not be a meaningful value here - neither is\nit an actual transaction, nor is it the right thing to be monitoring\nwhen concerned about bloat or such.\n\nIt seems like it might be better to instead cause NULL to be returned\nfor the respective column in pg_stat_activity etc?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 10 Dec 2019 14:37:16 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: xact_start for walsender & logical decoding not updated"
},
{
"msg_contents": "On Wed, 11 Dec 2019 at 02:08, Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2019-Dec-10, Tomas Vondra wrote:\n>\n> > On Tue, Dec 10, 2019 at 09:42:17AM +0900, Kyotaro Horiguchi wrote:\n> > > At Tue, 10 Dec 2019 00:44:09 +0100, Tomas Vondra <\n> tomas.vondra@2ndquadrant.com> wrote in\n>\n> > > I'm not sure how much xact_start for walsender is useful and we really\n> > > is not running a statement there. Also autovac launcher starts\n> > > transaction without a valid statement timestamp perhaps for the same\n> > > reason.\n> >\n> > Maybe, but then maybe we should change it so that we don't report any\n> > timestamps for such processes.\n>\n> Yeah, I think we should to that.\n\n\nAgreed. Don't report a transaction start timestamp at all if we're not in a\nread/write txn in the walsender, which we should never be when using a\nhistoric snapshot.\n\nIt's not interesting or relevant.\n\nReporting the commit timestamp of the current or last-processed xact would\nlikely just be fonfusing. I'd rather see that in pg_stat_replication if\nwe're going to show it, that way we can label it usefully.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\nOn Wed, 11 Dec 2019 at 02:08, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:On 2019-Dec-10, Tomas Vondra wrote:\n\n> On Tue, Dec 10, 2019 at 09:42:17AM +0900, Kyotaro Horiguchi wrote:\n> > At Tue, 10 Dec 2019 00:44:09 +0100, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote in\n\n> > I'm not sure how much xact_start for walsender is useful and we really\n> > is not running a statement there. Also autovac launcher starts\n> > transaction without a valid statement timestamp perhaps for the same\n> > reason.\n> \n> Maybe, but then maybe we should change it so that we don't report any\n> timestamps for such processes.\n\nYeah, I think we should to that.Agreed. 
Don't report a transaction start timestamp at all if we're not in a read/write txn in the walsender, which we should never be when using a historic snapshot.It's not interesting or relevant.Reporting the commit timestamp of the current or last-processed xact would likely just be fonfusing. I'd rather see that in pg_stat_replication if we're going to show it, that way we can label it usefully.-- Craig Ringer http://www.2ndQuadrant.com/ 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Fri, 13 Dec 2019 13:05:41 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: xact_start for walsender & logical decoding not updated"
},
{
"msg_contents": "At Fri, 13 Dec 2019 13:05:41 +0800, Craig Ringer <craig@2ndquadrant.com> wrote in \n> On Wed, 11 Dec 2019 at 02:08, Alvaro Herrera <alvherre@2ndquadrant.com>\n> wrote:\n> \n> > On 2019-Dec-10, Tomas Vondra wrote:\n> >\n> > > On Tue, Dec 10, 2019 at 09:42:17AM +0900, Kyotaro Horiguchi wrote:\n> > > > At Tue, 10 Dec 2019 00:44:09 +0100, Tomas Vondra <\n> > tomas.vondra@2ndquadrant.com> wrote in\n> >\n> > > > I'm not sure how much xact_start for walsender is useful and we really\n> > > > is not running a statement there. Also autovac launcher starts\n> > > > transaction without a valid statement timestamp perhaps for the same\n> > > > reason.\n> > >\n> > > Maybe, but then maybe we should change it so that we don't report any\n> > > timestamps for such processes.\n> >\n> > Yeah, I think we should to that.\n> \n> \n> Agreed. Don't report a transaction start timestamp at all if we're not in a\n> read/write txn in the walsender, which we should never be when using a\n> historic snapshot.\n> \n> It's not interesting or relevant.\n> \n> Reporting the commit timestamp of the current or last-processed xact would\n> likely just be fonfusing. I'd rather see that in pg_stat_replication if\n> we're going to show it, that way we can label it usefully.\n\nSounds reasonable. By the way, the starting of this thread is a valid\nvalue in xact_timestample for a moment at the starting of logical\nreplication. (I couln't see it unless I inserted a sleep() in\nIndentifySystem()). I'm not sure but AFAIS it is the only instance in\nwalsendeer. Should we take the trouble to stop that? (I put -1 for it)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 13 Dec 2019 15:01:24 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: xact_start for walsender & logical decoding not updated"
},
{
"msg_contents": "On 2019-Dec-13, Kyotaro Horiguchi wrote:\n\n> At Fri, 13 Dec 2019 13:05:41 +0800, Craig Ringer <craig@2ndquadrant.com> wrote in \n> > On Wed, 11 Dec 2019 at 02:08, Alvaro Herrera <alvherre@2ndquadrant.com>\n> > wrote:\n> > \n> > > On 2019-Dec-10, Tomas Vondra wrote:\n> > >\n> > > > On Tue, Dec 10, 2019 at 09:42:17AM +0900, Kyotaro Horiguchi wrote:\n> > > > > At Tue, 10 Dec 2019 00:44:09 +0100, Tomas Vondra <\n> > > tomas.vondra@2ndquadrant.com> wrote in\n> > >\n> > > > > I'm not sure how much xact_start for walsender is useful and we really\n> > > > > is not running a statement there. Also autovac launcher starts\n> > > > > transaction without a valid statement timestamp perhaps for the same\n> > > > > reason.\n> > > >\n> > > > Maybe, but then maybe we should change it so that we don't report any\n> > > > timestamps for such processes.\n> > >\n> > > Yeah, I think we should to that.\n> > Agreed. Don't report a transaction start timestamp at all if we're not in a\n> > read/write txn in the walsender, which we should never be when using a\n> > historic snapshot.\n> > \n> > It's not interesting or relevant.\n\nThis patch changes xact.c to avoid updating transaction start timestamps\nfor walsenders (maybe more commentary is desirable). I think logical\ndecoding is just a special form of walsender and thus it would also be\nupdated by this patch, unless I misunderstood what Tomas explained.\n\n> > Reporting the commit timestamp of the current or last-processed xact would\n> > likely just be confusing. I'd rather see that in pg_stat_replication if\n> > we're going to show it, that way we can label it usefully.\n> \n> Sounds reasonable.\n\nDevelopers interested in this feature can submit a patch, as usual :-)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 27 Dec 2019 16:46:18 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: xact_start for walsender & logical decoding not updated"
},
{
"msg_contents": "On Fri, Dec 27, 2019 at 04:46:18PM -0300, Alvaro Herrera wrote:\n>On 2019-Dec-13, Kyotaro Horiguchi wrote:\n>\n>> At Fri, 13 Dec 2019 13:05:41 +0800, Craig Ringer <craig@2ndquadrant.com> wrote in\n>> > On Wed, 11 Dec 2019 at 02:08, Alvaro Herrera <alvherre@2ndquadrant.com>\n>> > wrote:\n>> >\n>> > > On 2019-Dec-10, Tomas Vondra wrote:\n>> > >\n>> > > > On Tue, Dec 10, 2019 at 09:42:17AM +0900, Kyotaro Horiguchi wrote:\n>> > > > > At Tue, 10 Dec 2019 00:44:09 +0100, Tomas Vondra <\n>> > > tomas.vondra@2ndquadrant.com> wrote in\n>> > >\n>> > > > > I'm not sure how much xact_start for walsender is useful and we really\n>> > > > > is not running a statement there. Also autovac launcher starts\n>> > > > > transaction without a valid statement timestamp perhaps for the same\n>> > > > > reason.\n>> > > >\n>> > > > Maybe, but then maybe we should change it so that we don't report any\n>> > > > timestamps for such processes.\n>> > >\n>> > > Yeah, I think we should to that.\n>> > Agreed. Don't report a transaction start timestamp at all if we're not in a\n>> > read/write txn in the walsender, which we should never be when using a\n>> > historic snapshot.\n>> >\n>> > It's not interesting or relevant.\n>\n>This patch changes xact.c to avoid updating transaction start timestamps\n>for walsenders (maybe more commentary is desirable). I think logical\n>decoding is just a special form of walsender and thus it would also be\n>updated by this patch, unless I misunderstood what Tomas explained.\n>\n\nIt's true walsender should not be doing any read-write transactions or\nexecuting statements (well, maybe a decoding plugin could, but using\nhistoric snapshot).\n\nSo I agree not leaving xact_start for walsender processes seems OK.\n\n>> > Reporting the commit timestamp of the current or last-processed xact would\n>> > likely just be confusing. 
I'd rather see that in pg_stat_replication if\n>> > we're going to show it, that way we can label it usefully.\n>>\n>> Sounds reasonable.\n>\n>Developers interested in this feature can submit a patch, as usual :-)\n>\n\n;-)\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 29 Dec 2019 00:34:33 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: xact_start for walsender & logical decoding not updated"
},
{
"msg_contents": "On 2019-Dec-29, Tomas Vondra wrote:\n\n> On Fri, Dec 27, 2019 at 04:46:18PM -0300, Alvaro Herrera wrote:\n> > \n> > This patch changes xact.c to avoid updating transaction start timestamps\n> > for walsenders (maybe more commentary is desirable). I think logical\n> > decoding is just a special form of walsender and thus it would also be\n> > updated by this patch, unless I misunderstood what Tomas explained.\n> > \n> \n> It's true walsender should not be doing any read-write transactions or\n> executing statements (well, maybe a decoding plugin could, but using\n> historic snapshot).\n> \n> So I agree not leaving xact_start for walsender processes seems OK.\n\nOK, I pushed my patch to branches 10 - master.\n(See https://postgr.es/m/20200107211624.GA18974@alvherre.pgsql )\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 7 Jan 2020 18:17:17 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: xact_start for walsender & logical decoding not updated"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> OK, I pushed my patch to branches 10 - master.\n> (See https://postgr.es/m/20200107211624.GA18974@alvherre.pgsql )\n\nThe buildfarm seems less than happy with this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Jan 2020 18:07:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: xact_start for walsender & logical decoding not updated"
},
{
"msg_contents": "I wrote:\n> The buildfarm seems less than happy with this.\n\n... and, having now looked at the patch, I'm not surprised.\nBreaking stmtStartTimestamp, which is what you did, seems like\nan awfully side-effect-filled route to the goal. If you want\nto prevent monitoring from showing this, why didn't you just\nprevent monitoring from showing it? That is, I'd have expected\nsome am_walsender logic in or near pgstat.c, not here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Jan 2020 18:12:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: xact_start for walsender & logical decoding not updated"
},
{
"msg_contents": "On 2020-Jan-07, Tom Lane wrote:\n\n> I wrote:\n> > The buildfarm seems less than happy with this.\n> \n> ... and, having now looked at the patch, I'm not surprised.\n> Breaking stmtStartTimestamp, which is what you did, seems like\n> an awfully side-effect-filled route to the goal. If you want\n> to prevent monitoring from showing this, why didn't you just\n> prevent monitoring from showing it? That is, I'd have expected\n> some am_walsender logic in or near pgstat.c, not here.\n\nThat seems a pretty simple patch; attached (untested). However, my\npatch seemed a pretty decent way to achieve the goal, and I don't\nunderstand why it causes the failure, or indeed why we care about\nstmtStartTimestamp at all. I'll look into this again tomorrow.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 7 Jan 2020 20:52:34 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: xact_start for walsender & logical decoding not updated"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Jan-07, Tom Lane wrote:\n>> ... and, having now looked at the patch, I'm not surprised.\n>> Breaking stmtStartTimestamp, which is what you did, seems like\n>> an awfully side-effect-filled route to the goal. If you want\n>> to prevent monitoring from showing this, why didn't you just\n>> prevent monitoring from showing it? That is, I'd have expected\n>> some am_walsender logic in or near pgstat.c, not here.\n\n> That seems a pretty simple patch; attached (untested).\n\nI think you want && not ||, but otherwise that looks about right.\n\n> However, my\n> patch seemed a pretty decent way to achieve the goal, and I don't\n> understand why it causes the failure, or indeed why we care about\n> stmtStartTimestamp at all. I'll look into this again tomorrow.\n\nI'm not 100% sure why the failure either. The assertion is in\ncode that should only be reached in a parallel worker, and surely\nwalsenders don't launch parallel queries? But it looks to me\nthat all the critters using force_parallel_mode are unhappy.\n\nIn any case, my larger point is that stmtStartTimestamp is globally\naccessible state (via GetCurrentStatementStartTimestamp()) and you\ncan have little idea which corners of our code are using it, let\nalone what extensions might expect about it. Plus it feeds into\nxactStartTimestamp (cf StartTransaction()), increasing the footprint\nfor unwanted side-effects even more. Redefining its meaning\nto fix this problem is a really bad idea IMO.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Jan 2020 19:42:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: xact_start for walsender & logical decoding not updated"
},
{
"msg_contents": "On 2020-Jan-07, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2020-Jan-07, Tom Lane wrote:\n\n> > That seems a pretty simple patch; attached (untested).\n> \n> I think you want && not ||, but otherwise that looks about right.\n\nThanks, you were right; pushed.\n\n> > However, my\n> > patch seemed a pretty decent way to achieve the goal, and I don't\n> > understand why it causes the failure, or indeed why we care about\n> > stmtStartTimestamp at all. I'll look into this again tomorrow.\n> \n> I'm not 100% sure why the failure either. The assertion is in\n> code that should only be reached in a parallel worker, and surely\n> walsenders don't launch parallel queries? But it looks to me\n> that all the critters using force_parallel_mode are unhappy.\n\nI reproduced here with force_parallel_mode=regress, and indeed what is\nhappening is that log.rep. subscription walsenders (???) are running\nqueries per commands/subscriptioncmds.c::fetch_table_list(), and under\nthat GUC they beget parallel workers; and because the parent has\nam_walsender=true then they pass a timestamp of 0 to the children; but\nthe children retain am_walsender=false, so the assertion fires.\n\nI didn't spend more time on that, but it seems strange and possibly\ndangerous, since am_walsender is used to implement some restrictions.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 8 Jan 2020 15:12:05 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: xact_start for walsender & logical decoding not updated"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> I'm not 100% sure why the failure either. The assertion is in\n>> code that should only be reached in a parallel worker, and surely\n>> walsenders don't launch parallel queries? But it looks to me\n>> that all the critters using force_parallel_mode are unhappy.\n\n> I reproduced here with force_parallel_mode=regress, and indeed what is\n> happening is that log.rep. subscription walsenders (???) are running\n> queries per commands/subscriptioncmds.c::fetch_table_list(), and under\n> that GUC they beget parallel workers; and because the parent has\n> am_walsender=true then they pass a timestamp of 0 to the children; but\n> the children retain am_walsender=false, so the assertion fires.\n\n> I didn't spend more time on that, but it seems strange and possibly\n> dangerous, since am_walsender is used to implement some restrictions.\n\nIndeed. I think it's a truly horrible idea that we are issuing SPI\nqueries inside replication mechanisms. Quite aside from this problem,\ndo we really think that's free of security issues? Or even if you\nthink it is today, can it be kept so?\n\n(I've ranted before about keeping a proper layering design in this\nstuff. Just because it's easier to do stuff by calling a SQL query\ndoesn't mean that we should consider that acceptable.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Jan 2020 13:38:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: xact_start for walsender & logical decoding not updated"
}
]
[
{
"msg_contents": "Hi,\n\nI several times, most recently for the record format in the undo\npatchset, wished for a fast variable width integer implementation for\npostgres. Using very narrow integers, for space efficiency, solves the\nspace usage problem, but leads to extensibility / generality problems.\n\nOther cases where I wanted a variable width integer encodings are, non\nexhaustively: variable-width TIDs, on-disk representation for a smaller\nSQL level integer, WAL format, wire protocol, ...\n\nI spent a few days experimenting with different variable length\nencodings for unsigned and signed integers. My goal is/was to find an\nencoding and a prototype that has reasonably low space overhead and that\nis very quick to encode/decode.\n\n\nThe probably most common encoding is often called LEB128. It basically\nuses 7 bits in each byte for data, and one bit as a \"continuation\"\nbit. While not bad space-wise, it's not great for decoding (but also\nencoding) speed. The problem is a) that there's a branch for each byte,\nwhich will result in poorly predictable code, b) the 7 byte base needs\nto be changed to an 8 byte base (not as expensive as the branching\nissue, but quite noticably).\n\nI think a better encoding is something similar to UTF-8. That is,\nencoding the byte length in unary in the leading byte, with the data\nfollowing that prefix (plus the separator ending the prefix). That\nallows to avoid per-byte branching during decoding.\n\nIn contrast to UTF-8, in my prototype I decided to encode the prefix in\nunary as the number of leading 0 bits - the main advantage is that that\nmakes it possible to use instructions like \"count leading zeroes\" to\ndetermine the length (otherwise one needs to invert the data first).\n\n\nTo encode negative numbers, I chose to use what's often called \"zigzag\"\nencoding. 
One cannot just naively use the above encoding for signed\nintegers, as otherwise two's complement negative numbers would always be\nof the maximum length (due to the leading 1 bit(s)). Instead the\nto-be-encoded number is multiplied by two (i.e. <<1), and the sign bit\nis stored at the lowest bit; negative numbers are additionally stored\nwith all bits inverted.\n\nThe only disadvantage of that encoding that I am aware of is that\nencoded signed varints cannot just be memcmp'd (which they could with\ne.g. a single sign bit).\n\nAlternatively one could store the sign directly after the \"separator\nbit\" (the one ending the unary length indicator). But then there's the\nproblem that the number has leading data after the indicator. It's also\na bit harder to write the code for that encoding. I wrote the code for\nthat first, and I think it's very likely worth going for the simpler\napproach.\n\n\nIn my benchmarks encoding + decoding a varint in this way costs\napproximately 23 cycles with my intel skylake cpu. I think pipelining is\nhiding a bit of the latency, but not too much. As the benchmark\ncurrently is stupid and just encodes all 32bit integers, it's overly\nfriendly to the branch predictor however.\n\nEven with those caveats, I think that's a pretty good result. Other\nencodings were more expensive. And I think there's definitely some room\nfor optimization left.\n\n\n\nI've pushed a repo containing my initial implementation of the above to\nhttps://github.com/anarazel/varint_experiment and attached a tarball\ncontaining it. The header file contains a more detailed description of\nthe format.\n\nSo far I've only implemented uint64_t and int64_t encoding/decoding\nfunctions. It'd definitely make sense to implement a 4 byte variant, and\npotentially also one supporting larger lengths. I think it'd make sense\nto have a variant supporting a unary length indicator up to 64 bits long\n(i.e. 64 bytes of data (potentially minus one bit)).\n\nIf data lengths longer than that are required for a use case, it\nprobably is better to either a) use the max-representable 8 byte integer\nas an indicator that the length is stored or b) sacrifice another bit to\nrepresent whether the integer is the data itself or the length. Not sure\nif this is worthwhile.\n\n\nDo others see use in this? If so, does the encoding outlined above sound\nreasonable? I'm using a variant of this for a proposal for a different\nundo record format, but I really hope this would be interesting for\nothers too.\n\n\n\nGreetings,\n\nAndres Freund",
"msg_date": "Mon, 9 Dec 2019 17:50:54 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "A varint implementation for PG?"
},
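[Editor's note: the encoding described in the message above is implemented in C in the linked varint_experiment repo. As an illustration only, here is a minimal Python model of the same scheme — zigzag for the sign, the byte length in unary leading-zero bits followed by a separator 1 bit, and a 9-byte special case — a sketch for clarity, not the repo's code.]

```python
MASK64 = (1 << 64) - 1

def zigzag(n: int) -> int:
    """Signed -> unsigned: value shifted left by one, sign in the lowest
    bit; negative numbers are stored with all bits inverted."""
    return ((n << 1) ^ (n >> 63)) & MASK64

def unzigzag(u: int) -> int:
    return (u >> 1) ^ -(u & 1)

def encode_uvarint(u: int) -> bytes:
    """Length in unary leading-zero bits, then a separator 1 bit, then the
    data, big-endian. An all-zero first byte means 9 bytes total (no
    separator bit needed there)."""
    for nbytes in range(1, 9):
        if u < 1 << (7 * nbytes):  # n bytes carry 7*n data bits
            return (u | (1 << (7 * nbytes))).to_bytes(nbytes, "big")
    return b"\x00" + u.to_bytes(8, "big")

def decode_uvarint(b: bytes):
    """Return (value, encoded length); mirrors the clz-based decoding."""
    if b[0] == 0:                    # the 9-byte special case
        return int.from_bytes(b[1:9], "big"), 9
    nbytes = 9 - b[0].bit_length()   # count leading zero bits of byte 0
    return int.from_bytes(b[:nbytes], "big") ^ (1 << (7 * nbytes)), nbytes
```

This model reproduces the `varint_test` output quoted later in the thread: `encode_uvarint(681413261095983)` yields the bytes `01 02 6b bd e3 5f 74 2f`.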
{
"msg_contents": "On Tue, 10 Dec 2019 at 09:51, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> I several times, most recently for the record format in the undo\n> patchset, wished for a fast variable width integer implementation for\n> postgres. Using very narrow integers, for space efficiency, solves the\n> space usage problem, but leads to extensibility / generality problems.\n>\n\nYes. I've wanted flexible but efficiently packed integers quite a bit too,\nespecially when working with wire protocols.\n\nAm I stabbing completely in the dark when wondering if this might be a step\ntowards a way to lift the size limit on VARLENA Datums like bytea ?\n\nThere are obvious practical concerns with doing so, given that our protocol\noffers no handle based lazy fetching for big VARLENA values, but that too\nneeds a way to represent sizes sensibly and flexibly.\n\n\n> Even with those caveats, I think that's a pretty good result. Other\n> encodings were more expensive. And I think there's definitely some room\n> for optimization left.\n\n\nI don't feel at all qualified to question your analysis of the appropriate\nrepresentation. 
But your explanation certainly makes a lot of sense as\nsomeone approaching the topic mostly fresh - I've done a bit with BCD but\nnot much else.\n\nI assume we'd be paying a price in padding and alignment in most cases, and\nprobably more memory copying, but these representations would likely be\nappearing mostly in places where other costs are overwhelmingly greater\nlike network or disk I/O.\n\nIf data lengths longer than that are required for a use case\n\n\nIf baking a new variant integer format now, I think limiting it to 64 bits\nis probably a mistake given how long-lived PostgreSQL is, and how hard it\ncan be to change things in the protocol, on disk, etc.\n\n\n> it\n> probably is better to either a) use the max-representable 8 byte integer\n> as an indicator that the length is stored or b) sacrifice another bit to\n> represent whether the integer is the data itself or the length.\n>\n\nI'd be inclined to suspect that (b) is likely worth doing. If nothing else\nbecause not being able to represent the full range of a 64-bit integer in\nthe variant type is potentially going to be a seriously annoying hassle at\npoints where we're interacting with places that could use the full width.\nWe'd then have the potential for variant integers of > 2^64 but at least\nthat's wholly under our control.\n\nI also routinely underestimate how truly huge a 64-bit integer really is.\nBut even now 8 petabytes isn't as inconceivable as it used to be....\n\nIt mostly depends on how often you expect you'd be coming up on the\nboundaries where the extra bit would push you up a variant size.\n\nDo others see use in this?\n\n\nYes. 
Very, very much yes.\n\nI'd be quick to want to expose it to SQL too.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Fri, 13 Dec 2019 13:31:55 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: A varint implementation for PG?"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-13 13:31:55 +0800, Craig Ringer wrote:\n> Am I stabbing completely in the dark when wondering if this might be a step\n> towards a way to lift the size limit on VARLENA Datums like bytea ?\n\nIt could be - but I think it'd be a pretty small piece of it. But yes, I\nhave mused about that.\n\n\n\n> > Even with those caveats, I think that's a pretty good result. Other\n> > encodings were more expensive. And I think there's definitely some room\n> > for optimization left.\n> \n> \n> I don't feel at all qualified to question your analysis of the appropriate\n> representation. But your explanation certainly makes a lot of sense as\n> someone approaching the topic mostly fresh - I've done a bit with BCD but\n> not much else.\n> \n> I assume we'd be paying a price in padding and alignment in most cases, and\n> probably more memory copying, but these representations would likely be\n> appearing mostly in places where other costs are overwhelmingly greater\n> like network or disk I/O.\n\nI don't really see where padding/alignment costs come into play here?\n\n\n\n> If data lengths longer than that are required for a use case\n> \n> \n> If baking a new variant integer format now, I think limiting it to 64 bits\n> is probably a mistake given how long-lived PostgreSQL is, and how hard it\n> can be to change things in the protocol, on disk, etc.\n\nI don't think it's ever going to be sensible to transport 64bit quanta\nof data. Also, uh, it'd be larger than the data a postgres instance\ncould really contain, given LSNs are 64 bit.\n\n\n\n> > it\n> > probably is better to either a) use the max-representable 8 byte integer\n> > as an indicator that the length is stored or b) sacrifice another bit to\n> > represent whether the integer is the data itself or the length.\n\n> I'd be inclined to suspect that (b) is likely worth doing. 
If nothing else\n> because not being able to represent the full range of a 64-bit integer in\n> the variant type is potentially going to be a seriously annoying hassle at\n> points where we're interacting with places that could use the full width.\n> We'd then have the potential for variant integers of > 2^64 but at least\n> that's wholly under our control.\n\nI'm very very staunchly against doing either of these for the varints\nused widely. Throwing away even a bit is quite painful, as it\ne.g. reduces the range representable in a single byte from 0 - 127/-64 -\n63 to 0 - 63/-32 - 31. Without ever being useful, given what kind of\nthings varints are commonly going to describe. There's e.g. simply no\npractical use of describing a single WAL record length that's bigger\nthan 63 bit can represent.\n\nI *can* see a separate varint type, probably sharing some code, that\nsupports storing arbitrarily large numbers. But using that everywhere\nwould be pointless.\n\n\n> I'd be quick to want to expose it to SQL too.\n\nIt'll be a bit problematic to deal with all the casting necessary, and\nwith the likely resulting overload resolution issues. I'm wondering\nwhether it'd be worthwhile to have an ALTER TABLE ... STORAGE ... option\nthat encodes int2/4/8 as varints when inside a tuple, but otherwise just\nlet it be a normal integer.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 12 Dec 2019 21:45:29 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: A varint implementation for PG?"
},
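[Editor's note: the range arithmetic behind "throwing away even a bit" above is easy to check. A quick sketch — the helper names are invented here, not from the thread:]

```python
def one_byte_urange(flag_bits: int = 0) -> int:
    """Largest unsigned value fitting in a single encoded byte: one bit is
    the length separator, `flag_bits` more go to any extra tag bits,
    leaving 7 - flag_bits data bits."""
    return (1 << (7 - flag_bits)) - 1

def one_byte_srange(flag_bits: int = 0):
    """Same, signed: one of the remaining data bits is the zigzag sign."""
    data_bits = 7 - flag_bits
    return -(1 << (data_bits - 1)), (1 << (data_bits - 1)) - 1
```

With no extra flag bit this gives 0-127 / -64..63; sacrificing one bit halves it to 0-63 / -32..31, exactly the cost cited above.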
{
"msg_contents": "[ resurrecting this 2-year-old thread ]\n\nOn Fri, Dec 13, 2019 at 12:45 AM Andres Freund <andres@anarazel.de> wrote:\n> > If baking a new variant integer format now, I think limiting it to 64 bits\n> > is probably a mistake given how long-lived PostgreSQL is, and how hard it\n> > can be to change things in the protocol, on disk, etc.\n>\n> I don't think it's ever going to be sensible to transport 64bit quanta\n> of data. Also, uh, it'd be larger than the data a postgres instance\n> could really contain, given LSNs are 64 bit.\n\nWe already use 128-bit integers in some places in the source code, and\nit seems more likely than not that use of 128-bit integer data types\nwill continue to expand over time. If we want to represent 64 bit\nintegers in a variable number of bits today, it does seem pretty\nlikely that some day we will want to do the same thing with 128 bit\nintegers, and maybe eventually larger. And I think that all of that is\ntrue even though exhausting a 64-bit LSN space looks difficult today\nand will likely remain difficult for the foreseeable future.\n\nSo to me, the right thing to do here probably depends a bit on the use\ncase. If we are talking about using this for strictly internal\npurposes - e.g. cramming stuff into an internal data structure into\nwhich users have no direct visibility - then I think it's probably\nbest to pick a representation that corresponds exactly to the C data\ntype that we're trying to store. If we're starting out with a int64\nand we want to byte-squeeze it, the representation you've chosen here\nseems like just the thing. And it's easy to imagine why we might also\nwant to have similar transformations for uint64, int32, and uint32,\nand maybe eventually int128 and uint128. 
And if the details of the\nformat are a little different in each case that's fine; for these\nsorts of use cases we're always going to unpack into the same data\ntype that we packed originally, so it's not important to have exact\ncompatibility across differences in signedness or bit count.\n\nHowever, I suspect that the whole approach should be completely\nrevised for a user-visible data type. On the one hand, there's no\ntelling how large a value some user will want to represent, so\nlimiting ourselves to 64 bits does seem shortsighted. And on the other\nhand, if we've got a varlena, we already know the length, so it seems\nlike we shouldn't also encode the length in the value. Maybe there's a\nmore efficient way, but the first thing that occurs to me is to just\ndiscard high order bytes that are all zeroes or all ones until the\nhigh order bit of the next byte doesn't match and plonk the remaining\nbytes into the varlena. To decompress, just sign-extend out to the\ntarget length. Really, this kind of representation can be extended to\nrepresent arbitrarily large integers, even bigger than what we can\ncurrently do with numeric, which is already crazy huge, and it seems\nto have some advantage in that every payload byte contains exactly 8\ndata bits, so we don't need to shift or mask while encoding and\ndecoding.\n\nNow, we could think of introducing a new format for variable-length\ndatums, essentially making this a new typlen rather than a new kind of\nvarlena. That might be worth it, because if you are storing a lot of\nvalues that are small enough that this format could represent them in\n3 bytes or less, which I think would be everything up to +/- 2^20, you\ncould save a pretty significant amount of space even if your table was\nlaid out to avoid alignment padding. However, there would be some\ndistributed overhead to this, because all the places that have special\nhandling for typlen = -1 and typlen = -2 would need to grow new cases\nfor this. 
I'm not sure how much of a problem that is, really, but I\ndon't think we can go nuts with adding new typlen values.\n\n> It'll be a bit problmeatic to deal with all the casting necessary, and\n> with the likely resulting overload resolution issues. I'm wondering\n> whether it'd be worthwhile to have a ALTER TABLE ... STORAGE ... option\n> that encodes int2/4/8 as varints when inside a tuple, but otherwise just\n> let it be a normal integer.\n\nI don't see how a STORAGE option could work. We tend to treat those as\nhints, rather than critical data, which wouldn't work here. I think\nthere are a number of problems that crop up, but one of them is the\nsame thing we hit with the custom TOAST compression stuff. If you need\nto make a value of some row type out of a HeapTuple, you're not going\nto know which settings were used to create that heap tuple, and you're\ncertainly not going to know that when you go to deform that tuple. The\nonly thing you're going to have is the tuple descriptor, which AFAICS\nmeans that the representation needs to be a property of the type, not\nwhere the value is stored. Maybe you have a clever idea I'm not\nseeing.\n\nAs for the casting and overloading issues, that's tripped up quite a\nfew people now when trying to add new numeric data types - unsigned\nintegers being a case that has come up a few times now, I think. I\ndon't think it has to be a blocker. I think the solution is probably\nto accept that using unsigned or variable-width data types will\nrequire inserting more casts than would be required for integer\ndatatypes and numeric. That may not thrill everybody, but it may still\nbe better than deciding we're never ever ever adding any more data\ntypes for storing numbers than we have today.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 3 Aug 2021 14:26:16 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A varint implementation for PG?"
},
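[Editor's note: the "discard redundant sign bytes, sign-extend on decode" idea above can be modelled in a few lines. A Python sketch of the varlena payload — function names invented for illustration:]

```python
def pack_int(n: int) -> bytes:
    """Strip high-order bytes that are all zeroes (or all ones, for
    negatives) while the next byte's high bit still matches the sign."""
    if n == 0:
        return b""                # zero needs no payload bytes at all
    magnitude = n if n >= 0 else ~n
    nbytes = magnitude.bit_length() // 8 + 1   # data bits + sign bit, in bytes
    return n.to_bytes(nbytes, "big", signed=True)

def unpack_int(b: bytes) -> int:
    # decompression is just sign extension out to the target width
    return int.from_bytes(b, "big", signed=True)
```

Every payload byte carries 8 data bits, so there is no shifting or masking, and the scheme extends to arbitrarily large integers; everything up to +/- 2^20 fits in 3 payload bytes, as noted above.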
{
"msg_contents": "Hi,\n\nOn 2021-08-03 14:26:16 -0400, Robert Haas wrote:\n> [ resurrecting this 2-year-old thread ]\n> On Fri, Dec 13, 2019 at 12:45 AM Andres Freund <andres@anarazel.de> wrote:\n> > I don't think it's ever going to be sensible to transport 64bit quanta\n> > of data. Also, uh, it'd be larger than the data a postgres instance\n> > could really contain, given LSNs are 64 bit.\n> \n> We already use 128-bit integers in some places in the source code, and\n> it seems more likely than not that use of 128-bit integer data types\n> will continue to expand over time. If we want to represent 64 bit\n> integers in a variable number of bits today, it does seem pretty\n> likely that some day we will want to do the same thing with 128 bit\n> integers, and maybe eventually larger. And I think that all of that is\n> true even though exhausting a 64-bit LSN space looks difficult today\n> and will likely remain difficult for the foreseeable future.\n\nYea, my answer here was intended to be about areas where we \"bake in\", to\nquote Craig, the varint format, like varlena tags or protocol messages. For\nthose a 64bit length tag seems entirely sufficient.\n\nI think e.g. adding an SQL level variable width integer type would potentially\ncome to a different outcome. But even for things like that I suspect that more\noften than not you'd be better off with a variable length length value, rather\nthan encoding the whole \"huge\" value as a single variable width integer.\n\n\n> So to me, the right thing to do here probably depends a bit on the use\n> case. If we are talking about using this for strictly internal\n> purposes - e.g. cramming stuff into an internal data structure into\n> which users have no direct visibility - then I think it's probably\n> best to pick a representation that corresponds exactly to the C data\n> type that we're trying to store. 
If we're starting out with a int64\n> and we want to byte-squeeze it, the representation you've chosen here\n> seems like just the thing. And it's easy to imagine why we might also\n> want to have similar transformations for uint64, int32, and uint32,\n> and maybe eventually int128 and uint128. And if the details of the\n> format are a little different in each case that's fine; for these\n> sorts of use cases we're always going to unpack into the same data\n> type that we packed originally, so it's not important to have exact\n> compatibility across differences in signedness or bit count.\n\nAgreed.\n\n\n> However, I suspect that the whole approach should be completely\n> revised for a user-visible data type. On the one hand, there's no\n> telling how large a value some user will want to represent, so\n> limiting ourselves to 64 bits does seem shortsighted. And on the othe\n> hand, if we've got a varlena, we already know the length, so it seems\n> like we shouldn't also encode the length in the value.\n\nIf we're talking varlenas, then I think using embedded varints only really\nmakes sense if they're \"sub-components\" of that varlena, not the entire\nvalue...\n\n\n> Now, we could think of introducing a new format for variable-length\n> datums, essentially making this a new typlen rather than a new kind of\n> varlena. That might be worth it, because if you are storing a lot of\n> values that are small enough that this format could represent them in\n> 3 bytes or less, which I think would be everything up to +/- 2^20, you\n> could save a pretty significant amount of space even if your table was\n> laid out to avoid alignment padding. However, there would be some\n> distributed overhead to this, because all the places that have special\n> handling for typlen = -1 and typlen = -2 would need to grow new cases\n> for this. 
I'm not sure how much of a problem that is, really, but I\n> don't think we can go nuts with adding new typlen values.\n\nYes - the branches for the different typlens already are quite visible in\nprofiles... It might be that we could compensate for that without too much\ndifficulty (e.g. by having a field in a tupdesc indicating what kind of types\nare in use in a tuple to be formed/deformed, and dispatching to different\nform/deform routines based on that), but it's not obviously a win.\n\n\n\n> > It'll be a bit problmeatic to deal with all the casting necessary, and\n> > with the likely resulting overload resolution issues. I'm wondering\n> > whether it'd be worthwhile to have a ALTER TABLE ... STORAGE ... option\n> > that encodes int2/4/8 as varints when inside a tuple, but otherwise just\n> > let it be a normal integer.\n> \n> I don't see how a STORAGE option could work. We tend to treat those as\n> hints, rather than critical data, which wouldn't work here.\n\nI wasn't thinking that this would be something that could be changed without a\ntable rewrite - so maybe STORAGE would be the wrong place to put it, given how\nit's currently used. OTOH, I don't think it'd be that big a problem to have a\nrewrite for some, but not all STORAGE options...\n\n\n> I think there are a number of problems that crop up, but one of them is the\n> same thing we hit with the custom TOAST compression stuff. If you need to\n> make a value of some row type out of a HeapTuple, you're not going to know\n> which settings were used to create that heap tuple, and you're certainly not\n> going to know that when you go to deform that tuple. The only thing you're\n> going to have is the tuple descriptor, which AFAICS means that the\n> representation needs to be a property of the type, not where the value is\n> stored. 
Maybe you have a clever idea I'm not seeing.\n\nAll the information needed for deforming fields is a property of pg_attribute,\nnot pg_type (because the type could have been dropped after the column was\ndropped, but we still need to skip over the column). So if one accepts needing\nrewrites for changing the data encoding, I don't think there'd be a huge\nissue here.\n\nI think the compression stuff is a bit different because you want to prevent\ncompressed values of one type wandering into another table with a different\nstorage type. That problem wouldn't really exist for an SQL level varint as I\nwas imagining them - they'd always be deformed into the \"normal\" int2/4/8\nDatum representation outside of a HeapTuple. We obviously don't want to do\nthat for varlenas, because they need to be fetched from out-of-line, which is\nseveral orders of magnitude more expensive than decoding a varint.\n\n\nI am now wondering if what we're talking about here would best be thought of\nnot as a variable width integer type, but a variable-width encoding for all\npass-by-value types.\n\nLeaving on-disk compatibility aside (:)), ISTM that we by default could use\nthe following heuristic to decide how to encode pass-by-value types: If it's a\nleading fixed-width NOT NULL type, store it in fixed-length\nencoding. Otherwise use a variable-length encoding.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 3 Aug 2021 12:32:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: A varint implementation for PG?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> However, I suspect that the whole approach should be completely\n> revised for a user-visible data type. On the one hand, there's no\n> telling how large a value some user will want to represent, so\n> limiting ourselves to 64 bits does seem shortsighted. And on the othe\n> hand, if we've got a varlena, we already know the length, so it seems\n> like we shouldn't also encode the length in the value. Maybe there's a\n> more efficient way, but the first thing that occurs to me is to just\n> discard high order bytes that are all zeroes or all ones until the\n> high order bit of the next byte doesn't match and plonk the remaining\n> bytes into the varlena. To decompress, just sign-extend out to the\n> target length. Really, this kind of representation can be extended to\n> represent arbitrarily large integers, even bigger than what we can\n> currently do with numeric, which is already crazy huge, and it seems\n> to have some advantage in that every payload byte contains exactly 8\n> data bits, so we don't need to shift or mask while encoding and\n> decoding.\n\n+1. I think this, together with our existing rules for varlena headers,\nwould address the issue quite nicely. Any sanely-sized integer would\nrequire only a one-byte header, so the minimum on-disk size is 2 bytes\n(with no alignment padding required). I don't buy that there's enough\nneed to justify inventing a new typlen code, since even if you did it\nwouldn't improve things all that much compared to this design.\n\n(Oh ... actually the minimum on-disk size is one byte, since value zero\nwould require no payload bytes.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 03 Aug 2021 15:39:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: A varint implementation for PG?"
},
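[Editor's note: a sketch of the size arithmetic above, assuming the strip-redundant-sign-bytes payload proposed upthread plus PostgreSQL's 1-byte short varlena header — helper names invented here:]

```python
def payload_bytes(n: int) -> int:
    """Payload length once redundant leading sign bytes are stripped;
    value zero needs no payload bytes at all."""
    if n == 0:
        return 0
    magnitude = n if n >= 0 else ~n
    return magnitude.bit_length() // 8 + 1   # data bits + sign bit, in bytes

def on_disk_bytes(n: int) -> int:
    """1-byte short varlena header + payload, no alignment padding."""
    return 1 + payload_bytes(n)
```

This matches the sizes quoted: any sanely-sized integer takes 2 bytes minimum on disk, and the value zero only 1 byte.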
{
"msg_contents": "On Tue, Aug 3, 2021 at 3:32 PM Andres Freund <andres@anarazel.de> wrote:\n> I am now wondering if what we're talking about here would best be thought of\n> not as a variable width integer type, but a variable-width encoding for all\n> pass-by-value types.\n>\n> Leaving on-disk compatibility aside (:)), ISTM that we by default could use\n> the following heuristic to decide how to encode pass-by-value types: If it's a\n> leading fixed-width NOT NULL type, store it in fixed-length\n> encoding. Otherwise use a variable-length encoding.\n\nThis is pretty integer-centric, though. If your pass-by-value type is\nstoring timestamps, for example, they're not likely to be especially\nclose to zero. Since a 64-bit address is pretty big, perhaps they're\nstill close enough to zero that this will work out to a win, but I\ndon't know, that seems a bit cheesy. I grant that it could work out to\na win -- pass-by-value data types whose distribution is very different\nfrom what's typical for integers, or for that matter columns full of\nintegers that all happen to be toward the extreme values the data type\ncan store, are probably not that common. I just don't really like\nmaking such assumptions on a system-wide basis (as opposed to a\nper-datatype basis where it's easier to reason about the\nconsequences).\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 4 Aug 2021 09:31:25 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A varint implementation for PG?"
},
{
"msg_contents": "Hi,\n\nOn 2021-08-04 09:31:25 -0400, Robert Haas wrote:\n> This is pretty integer-centric, though. If your pass-by-value type is\n> storing timestamps, for example, they're not likely to be especially\n> close to zero. Since a 64-bit address is pretty big, perhaps they're\n> still close enough to zero that this will work out to a win, but I\n> don't know, that seems a bit cheesy.\n\nYea, that's fair. The really bad™ example probably is negative numbers - which\nwouldn't be easy to do something about in a datatype agnostic way.\n\n\n> I grant that it could work out to a win -- pass-by-value data types whose\n> distribution is very different from what's typical for integers, or for that\n> matter columns full of integers that all happen to be toward the extreme\n> values the data type can store, are probably not that common.\n\nIt'd work out as a wash for common timestamps:\n\n./varint_test -u 681413261095983\nprocessing unsigned\nunsigned:\t681413261095983\n input bytes:\t 00 02 6b bd e3 5f 74 2f\n8 output bytes:\t 01 02 6b bd e3 5f 74 2f\ndecoded:\t681413261095983\n\nI don't think there's many workloads where plain integers would skew extreme\nenough for it to work out to a loss often enough to matter. But:\n\n> I just don't really like making such assumptions on a system-wide basis (as\n> opposed to a per-datatype basis where it's easier to reason about the\n> consequences).\n\nI'd not at all be opposed to datatypes having influence over the on-disk\nencoding. I was just musing about a default heuristic that could make sense. I\ndo think you'd want something that chooses the encoding for one pg_attribute\nvalues based on preceding columns.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 4 Aug 2021 10:41:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: A varint implementation for PG?"
},
{
"msg_contents": "Hi,\n\nOn 2021-08-03 14:26:16 -0400, Robert Haas wrote:\n> [ resurrecting this 2-year-old thread ]\n>\n> On Fri, Dec 13, 2019 at 12:45 AM Andres Freund <andres@anarazel.de> wrote:\n> > > If baking a new variant integer format now, I think limiting it to 64 bits\n> > > is probably a mistake given how long-lived PostgreSQL is, and how hard it\n> > > can be to change things in the protocol, on disk, etc.\n> >\n> > I don't think it's ever going to be sensible to transport 64bit quanta\n> > of data. Also, uh, it'd be larger than the data a postgres instance\n> > could really contain, given LSNs are 64 bit.\n>\n> We already use 128-bit integers in some places in the source code, and\n> it seems more likely than not that use of 128-bit integer data types\n> will continue to expand over time. If we want to represent 64 bit\n> integers in a variable number of bits today, it does seem pretty\n> likely that some day we will want to do the same thing with 128 bit\n> integers, and maybe eventually larger. And I think that all of that is\n> true even though exhausting a 64-bit LSN space looks difficult today\n> and will likely remain difficult for the foreseeable future.\n\nI was thinking a bit about how to encode arbitrary length values for the cases\nwhere that's interesting.\n\nCurrently what I proposed for 8 byte unsigned integers is that we encode the\nlength in unary unset bits, followed by a set bit. As a special case, a prefix\nof 8 bits indicates a length of 9, without needing the separator bit - we\ndon't need a separator bit at that point.\n\nExtending that to arbitrary lengths obviously at some point makes the encoding\nin unary wasteful, and the benefit of few branches vanishes. 
So what I was\nthinking is that for variable length pieces of data that are not limited to 8\nbytes, we could replace the '8 0 bits' special case with a new special case:\nThe length in bytes follows as a max-8-byte varint.\n\nThat'd leave us with the following overheads:\n- 0 - 127: 0 bytes (i.e. 0 to 7 bits)\n- 128 - 2^56 - 1: 1 byte (i.e. 7 bits - 7 bytes)\n- 7 bytes - 127 bytes: 2 bytes\n- 128 bytes - 16383 bytes: 3 bytes\n- 16384 bytes - 2097151 bytes: 4 bytes\n- 2097152 bytes - 268435455 bytes: 5 bytes\n- 268435456 bytes - 34359738367 bytes: 6 bytes\n- ...\n\nThe obvious alternative would of course be to just always store the length\nprefix separately:\n- 0 - 127 bytes: 1 byte\n- 128 - 16383 bytes: 2 bytes\n- 16384 - 2097151 bytes: 3 bytes\n- ...\n\nI do suspect that for a fair number of cases the \"0 byte overhead\" for very\nsmall values would be worth the increase in overhead later. Particularly\nbecause the decoding for values up to 7 bytes would be cheaper cpu-wise as\nwell.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 4 Aug 2021 12:01:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: A varint implementation for PG?"
},
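The scheme described in the message above — the total length encoded in unary leading zero bits of the first byte, terminated by a set bit, with an all-zero first byte marking the 9-byte special case — can be sketched in Python as follows. This is an illustrative model of the idea as stated in the thread, not the patch's actual C code; the function names are invented for the example:

```python
def encode_uvarint64(v: int) -> bytes:
    """Encode an unsigned 64-bit value. An n-byte encoding starts with
    (n - 1) zero bits followed by a one bit, leaving 7*n data bits; a
    first byte of 0x00 means 8 full data bytes follow (9 bytes total)."""
    assert 0 <= v < 1 << 64
    for nbytes in range(1, 9):
        data_bits = 7 * nbytes          # each extra byte nets 7 data bits
        if v < 1 << data_bits:
            # set the terminating marker bit just above the data bits
            return (v | (1 << data_bits)).to_bytes(nbytes, "big")
    return b"\x00" + v.to_bytes(8, "big")   # 9-byte special case

def decode_uvarint64(buf: bytes) -> tuple[int, int]:
    """Return (value, bytes consumed). The length is recoverable from
    the first byte alone, which is what makes the scheme friendly to
    find-first-set style instructions in a C implementation."""
    first = buf[0]
    if first == 0:                      # eight zero bits: 9-byte case
        return int.from_bytes(buf[1:9], "big"), 9
    nbytes = 9 - first.bit_length()     # leading zero count, plus one
    data_bits = 7 * nbytes
    value = int.from_bytes(buf[:nbytes], "big") & ((1 << data_bits) - 1)
    return value, nbytes
```

Encoding the timestamp value quoted earlier in the thread reproduces the same bytes shown there: `encode_uvarint64(681413261095983)` yields `01 02 6b bd e3 5f 74 2f`.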
{
"msg_contents": "On Wed, Aug 4, 2021 at 3:01 PM Andres Freund <andres@anarazel.de> wrote:\n> Extending that to arbitrary lengths obviously at some point makes the encoding\n> in unary wasteful, and the benefit of few branches vanishes. So what I was\n> thinking is that for variable length pieces of data that are not limited to 8\n> bytes, we could replace the '8 0 bits' special case with a new special case:\n> The length in bytes follows as a max-8-byte varint.\n\nBut what if I have a machine with more than 16 exabytes of RAM and I\nwant to use all of its memory to store one really big integer?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 4 Aug 2021 15:37:36 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A varint implementation for PG?"
},
{
"msg_contents": "On 2021-08-04 15:37:36 -0400, Robert Haas wrote:\n> On Wed, Aug 4, 2021 at 3:01 PM Andres Freund <andres@anarazel.de> wrote:\n> > Extending that to arbitrary lengths obviously at some point makes the encoding\n> > in unary wasteful, and the benefit of few branches vanishes. So what I was\n> > thinking is that for variable length pieces of data that are not limited to 8\n> > bytes, we could replace the '8 0 bits' special case with a new special case:\n> > The length in bytes follows as a max-8-byte varint.\n> \n> But what if I have a machine with more than 16 exabytes of RAM and I\n> want to use all of its memory to store one really big integer?\n\nThen the embedded 8 byte length value would just have to do the same thing\nrecursively to store that huge length header :)\n\n\n",
"msg_date": "Wed, 4 Aug 2021 12:45:58 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: A varint implementation for PG?"
},
{
"msg_contents": "On Wed, Aug 4, 2021 at 3:46 PM Andres Freund <andres@anarazel.de> wrote:\n> > But what if I have a machine with more than 16 exabytes of RAM and I\n> > want to use all of its memory to store one really big integer?\n>\n> Then the embedded 8 byte length value would just have to do the same thing\n> recursively to store that huge length header :)\n\nWell, yes. But more seriously, my point is that I can't imagine why we\nwould need an object with a length bounded by 2^64. I mean I suppose\nthere's no harm in looking to the future, but that's *really big*.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 4 Aug 2021 16:38:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A varint implementation for PG?"
},
{
"msg_contents": "\n\nOn 8/4/21 9:01 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2021-08-03 14:26:16 -0400, Robert Haas wrote:\n>> [ resurrecting this 2-year-old thread ]\n>>\n>> On Fri, Dec 13, 2019 at 12:45 AM Andres Freund <andres@anarazel.de> wrote:\n>>>> If baking a new variant integer format now, I think limiting it to 64 bits\n>>>> is probably a mistake given how long-lived PostgreSQL is, and how hard it\n>>>> can be to change things in the protocol, on disk, etc.\n>>>\n>>> I don't think it's ever going to be sensible to transport 64bit quanta\n>>> of data. Also, uh, it'd be larger than the data a postgres instance\n>>> could really contain, given LSNs are 64 bit.\n>>\n>> We already use 128-bit integers in some places in the source code, and\n>> it seems more likely than not that use of 128-bit integer data types\n>> will continue to expand over time. If we want to represent 64 bit\n>> integers in a variable number of bits today, it does seem pretty\n>> likely that some day we will want to do the same thing with 128 bit\n>> integers, and maybe eventually larger. And I think that all of that is\n>> true even though exhausting a 64-bit LSN space looks difficult today\n>> and will likely remain difficult for the foreseeable future.\n> \n> I was thinking a bit about how to encode arbitrary length values for the cases\n> where that's interesting.\n> \n> Currently what I proposed for 8 byte unsigned integers is that we encode the\n> length in unary unset bits, followed by a set bit. As a special case, a prefix\n> of 8 bits indicates a length of 9, without needing the separator bit - we\n> don't need a separator bit at that point.\n> \n\nHow is that better than the two varint flavors that are already out \nthere, i.e. the bitcoin [1] and protocol buffers [2]?\n\nThe first one seems quite efficient in how it encodes the length into \nvery few bits (which matters especially for small values). 
It's designed \nfor integers with 1B, 2B, 4B or 8B, but it can be extended to arbitrary \nlengths fairly easily, I think:\n\nLook at the first byte, and\n\n0 - 243 - encoded as is\n244 - 1 byte\n245 - 2 bytes\n246 - 3 bytes\n247 - 4 bytes\n248 - 5 bytes\n249 - 6 bytes\n250 - 7 bytes\n251 - 8 bytes\n252 - next 1 byte is length\n253 - next 2 bytes are length\n254 - next 3 bytes are length\n255 - next 4 bytes are length\n\nIf we want to support longer lengths, we'd have to reserve an extra \nvalue (which reduces the number of values that require a single byte).\n\nThe [2] is a bit more problematic, as it's tailored for very short \nvalues (essentially wasting 1/8 of bits) and you have to parse the whole \nvalue to determine the length.\n\n[1] https://wiki.bitcoinsv.io/index.php/VarInt\n[2] https://developers.google.com/protocol-buffers/docs/encoding\n\n> Extending that to arbitrary lengths obviously at some point makes the encoding\n> in unary wasteful, and the benefit of few branches vanishes. So what I was\n> thinking is that for variable length pieces of data that are not limited to 8\n> bytes, we could replace the '8 0 bits' special case with a new special case:\n> The length in bytes follows as a max-8-byte varint.\n> \n> That'd leave us with the following overheads:\n> - 0 - 127: 0 bytes (i.e. 0 to 7 bits)\n> - 128 - 2^56 - 1: 1 byte (i.e. 7 bits - 7 bytes)\n> - 7 bytes - 127 bytes: 2 bytes\n> - 128 bytes - 16383 bytes: 3 bytes\n> - 16384 bytes - 2097151 bytes: 4 bytes\n> - 2097152 bytes - 268435455 bytes: 5 bytes\n> - 268435456 bytes - 34359738367 bytes: 6 bytes\n> - ...\n> \n> The obvious alternative would of course be to just always store the length\n> prefix separately:\n> - 0 - 127 bytes: 1 byte\n> - 128 - 16383 bytes: 2 bytes\n> - 16384 - 2097151 bytes: 3 bytes\n> - ...\n> \n> I do suspect that for a fair number of cases the \"0 byte overhead\" for very\n> small values would be worth the increase in overhead later. 
Particularly\n> because the decoding for values up to 7 bytes would be cheaper cpu-wise as\n> well.\n> \n\nYeah. Especially if the long values can be compressed, which probably \napplies to most real-world data sets. IMHO the efficiency for short \nvalues is the more important thing.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 4 Aug 2021 23:44:10 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: A varint implementation for PG?"
},
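For unsigned integers, the first-byte dispatch table quoted above can be sketched like this. It is an illustration of the idea only, assuming big-endian payload bytes; the 252-255 length-prefix cases for arbitrary-length data are omitted, and the function names are invented here:

```python
def encode_dispatch(v: int) -> bytes:
    """First byte 0-243 is the value itself; tags 244-251 say how many
    payload bytes (1-8) follow."""
    assert 0 <= v < 1 << 64
    if v <= 243:
        return bytes([v])               # encoded as is, zero overhead
    nbytes = (v.bit_length() + 7) // 8  # 1..8 payload bytes
    return bytes([243 + nbytes]) + v.to_bytes(nbytes, "big")

def decode_dispatch(buf: bytes) -> tuple[int, int]:
    """Return (value, bytes consumed); one table-style branch on the tag."""
    tag = buf[0]
    if tag <= 243:
        return tag, 1
    nbytes = tag - 243
    return int.from_bytes(buf[1:1 + nbytes], "big"), 1 + nbytes
```

Note that values 244-255 take two bytes here (tag plus one payload byte), which is the price of reserving the tag range.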
{
"msg_contents": "Hi,\n\nOn 2021-08-04 23:44:10 +0200, Tomas Vondra wrote:\n> How is that better than the two varint flavors that are already out there,\n> i.e. the bitcoin [1] and protocol buffers [2]?\n\nThe protobuf one is *terrible* for CPU efficiency. You need to go through each\nbyte, do masking and shifting for each byte and then have a conditional\nbranch. That's bad from the amount of instructions you need to execute,\nand *really* bad for the branch predictor.\n\n\n> The first one seems quite efficient in how it encodes the length into very\n> few bits (which matters especially for small values). It's designed for\n> integers with 1B, 2B, 4B or 8B, but it can be extended to arbitrary lengths\n> fairly easily, I think:\n\n> Look at the first byte, and\n> \n> 0 - 243 - encoded as is\n> 244 - 1 byte\n> 245 - 2 bytes\n> 246 - 3 bytes\n> 247 - 4 bytes\n> 248 - 5 bytes\n> 249 - 6 bytes\n> 250 - 7 bytes\n> 251 - 8 bytes\n> 252 - next 1 byte is length\n> 253 - next 2 bytes are length\n> 254 - next 3 bytes are length\n> 255 - next 4 bytes are length\n\n> If we want to support longer lengths, we'd have to reserve an extra value\n> (which reduces the number of values that require a single byte).\n\nI think that's not a bad scheme. I think it may end up being a bit more\nexpensive to decode because you need more branches instead of using\nfind-first-set type instructions.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 4 Aug 2021 16:05:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: A varint implementation for PG?"
},
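For comparison, the protocol-buffers style (LEB128) loop criticized in the message above looks roughly like this — note the per-byte mask, shift, and conditional branch on the continuation bit, which is exactly the pattern that is hard on the branch predictor:

```python
def encode_leb128(v: int) -> bytes:
    """Protocol-buffers style base-128 varint: 7 data bits per byte,
    least-significant group first, high bit set on all but the last byte."""
    assert v >= 0
    out = bytearray()
    while True:
        byte = v & 0x7F
        v >>= 7
        if v:
            out.append(byte | 0x80)     # continuation bit: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def decode_leb128(buf: bytes) -> tuple[int, int]:
    """Return (value, bytes consumed); one data-dependent branch per byte."""
    result = shift = 0
    for i, byte in enumerate(buf):
        result |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:
            return result, i + 1
    raise ValueError("truncated varint")
```

The classic example: 300 encodes to the two bytes `ac 02`.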
{
"msg_contents": "On 8/5/21 1:05 AM, Andres Freund wrote:\n> Hi,\n> \n> On 2021-08-04 23:44:10 +0200, Tomas Vondra wrote:\n>> How is that better than the two varint flavors that are already out there,\n>> i.e. the bitcoin [1] and protocol buffers [2]?\n> \n> The protobuf one is *terrible* for CPU efficiency. You need to go through each\n> byte, do masking and shifting for each byte and then have a conditional\n> branch. That's bad from the the amount of instructions you need to execute,\n> and *really* bad for the branch predictor.\n> \n\nYeah, probably true - particularly for longer values. No argument here.\n\n> \n>> The first one seems quite efficient in how it encodes the length into very\n>> few bits (which matters especially for small values). It's designed for\n>> integers with 1B, 2B, 4B or 8B, but it can be extended to arbitrary lengths\n>> fairly easily, I think:\n> \n>> Look at the first byte, and\n>>\n>> 0 - 243 - encoded as is\n>> 244 - 1 byte\n>> 245 - 2 bytes\n>> 246 - 3 bytes\n>> 247 - 4 bytes\n>> 248 - 5 bytes\n>> 249 - 6 bytes\n>> 250 - 7 bytes\n>> 251 - 8 bytes\n>> 252 - next 1 byte is length\n>> 253 - next 2 bytes are length\n>> 254 - next 3 bytes are length\n>> 255 - next 4 bytes are length\n> \n>> If we want to support longer lengths, we'd have to reserve an extra value\n>> (which reduces the number of values that require a single byte).\n> \n> I think that's not a bad scheme. I think it may end up being a bit more\n> expensive to decode because you need more branches instead of using\n> find-first-set type instructions.\n> \n\nI don't think it requires many branches, because you can essentially do\n\n if (byte[0] <= 243)\n length = 0\n else if (byte[0] <= 251)\n length = byte[0] - 243\n else\n {\n length_bytes = byte[0] - 251;\n ... 
read length_bytes, decode length\n }\n\nbut I haven't tried implementing it and maybe my intuition is wrong.\n\nOr maybe it'd be a good scheme for on-disk format, but poor for memory.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 5 Aug 2021 01:21:18 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: A varint implementation for PG?"
},
{
"msg_contents": "\nOn 8/4/21 7:21 PM, Tomas Vondra wrote:\n> On 8/5/21 1:05 AM, Andres Freund wrote:\n>\n>>\n>>> The first one seems quite efficient in how it encodes the length\n>>> into very\n>>> few bits (which matters especially for small values). It's designed for\n>>> integers with 1B, 2B, 4B or 8B, but it can be extended to arbitrary\n>>> lengths\n>>> fairly easily, I think:\n>>\n>>> Look at the first byte, and\n>>>\n>>> 0 - 243 - encoded as is\n>>> 244 - 1 byte\n>>> 245 - 2 bytes\n>>> 246 - 3 bytes\n>>> 247 - 4 bytes\n>>> 248 - 5 bytes\n>>> 249 - 6 bytes\n>>> 250 - 7 bytes\n>>> 251 - 8 bytes\n>>> 252 - next 1 byte is length\n>>> 253 - next 2 bytes are length\n>>> 254 - next 3 bytes are length\n>>> 255 - next 4 bytes are length\n>>\n>>> If we want to support longer lengths, we'd have to reserve an extra\n>>> value\n>>> (which reduces the number of values that require a single byte).\n>>\n>> I think that's not a bad scheme. I think it may end up being a bit more\n>> expensive to decode because you need more branches instead of using\n>> find-first-set type instructions.\n>>\n>\n> I don't think it requires many branches, because you can essentially do\n>\n> if (byte[0] <= 243)\n> length = 0\n> else if (byte[0] <= 251)\n> length = byte[0] - 243\n> else\n> {\n> length_bytes = byte[0] - 251;\n> ... read length_bytes, decode length\n> }\n>\n> but I haven't tried implementing it and maybe my intuition is wrong.\n>\n> Or maybe it'd be a good scheme for on-disk format, but poor for memory.\n>\n>\n>\n\nThis seems like quite an elegant scheme. Certainly worth trying out. I\nfind it hard to believe that more than 4 length bytes would be needed\n(although that reminds me of the famous and possibly apocryphal quote\n\"640K ought to be enough for anybody.\")\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 5 Aug 2021 09:18:11 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: A varint implementation for PG?"
},
{
"msg_contents": "Andres asked me off-list if I could take another look at this.\n\nSo here's a bit of review:\n\n- The header comment at the top of the file gives some examples of how\nthe encoding works, and then basically says, oh wait, there's also a\nsign bit at the end, so all those examples are actually wrong. It's\nreally this other thing. Perhaps it's possible to reword things to\navoid that.\n\n- The XXX comment in the file header says \"variants\" where it probably\nmeans \"varints\". A later comment mentions \"lenght\" instead of\n\"length\".\n\n- In pg_varint_encode_uint64, where it says \"XXX: I'm sure there's a\nneater way to do this,\" I'm not sure exactly what your parameters for\nneater are, but you could avoid the conditional if you just did:\n\nbits_of_output_data = bits_of_input_data + bytes_of_input_data;\nbytes_of_output_data = (bits_of_output_data + BITS_PER_BYTE - 1) /\nBITS_PER_BYTE;\n\nI think the comment implies that you thought of that and discarded it\non performance grounds, but I'm not quite sure that I'm right about\nthat, so I mention it here.\n\nI also thought of another approach that doesn't require computing\nbytes_of_input_data:\n\nbytes_of_output_data = bits_of_input_data / 7 + 1;\n\nThe intuition here is that every byte you add to the output gives you\nroom for 7 additional data bits, because you gain 8 for the new byte\nbut also have to consume 1 bit from the first byte, for a net gain of\n7. This formula gives the wrong answer when bits_of_input_data is 63\nor 64, but you don't need to use it for values greater than\nPG_VARINT_UINT64_MAX_8BYTE_VAL, so that doesn't matter.\n\n- It's a bit surprising that the length argument to memcpy() is a\nconstant in the 8-bytes-or-less case. It should be fine, because the\noutput buffer must be big enough to hold at least 9 more bytes, so all\nthat can happen is we write unnecessary bytes that the caller can\nlater overwrite, or not. But it would be worth a comment, I think. 
In\nparticular, we should note that you should always have enough\nallocated space in the output buffer for the maximum width of 9 bytes\neven if you know the number you're actually encoding is small. You do\nmention this in the decoding function, but not on the encoding side.\n\n- The FIXME comment for pg_varint_decode_uint64 no verb.\n\n- ret <<= BITS_PER_BYTE * (BITS_PER_BYTE - bytes) is awfully hard to\nreason about. My first thought was that it had to be wrong: if we are\nshifting left during encoding, how can we also be shifting left during\ndecoding? Actually I think I see, now, how it works on a little-Endian\nmachine. Logically, the encoded bytes are the least-significant bytes.\nWe copy them to the least-significant bytes of \"ret,\" but they're in\nthe wrong order, with the most significant byte of the encoded\nrepresentation in the last significant byte of \"ret\". By shifting\nleft, we move all the bytes to the other \"end\" of \"ret\", and then the\nbyte swap puts them back where they were, but now in the right order.\nAlong the way, the garbage bits we're supposed to be ignoring get\nthrown away and replaced with zeroes. But I still don't see how it\nworks on a big-Endian machine. On such a machine, we copy the encoded\nbytes, which are still the least significant bytes of the original\nvalue, to the most significant bytes of \"ret\". The byte swap isn't\ngoing to do anything here, so in this case it feels like the shift\nneeds to be in the other direction -- a shift right. But maybe not,\nbecause originally I thought it should be a shift right in both cases,\nand now I don't think that's right.\n\n- Re \"mask out length indicator bits & separator bit\", only the\nseparator bit actually is being or needs to be masked out.\n\n- I think the overall interface is fine. 
I think it might be useful to\nadd a function that returns the length to which something would encode\nwithout actually encoding it, for cases where you want to estimate how\nmuch space you're going to need for something so that you can allocate\nspace for it, and then only afterwards do the encoding for real.\n\nThat's all I've got.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 5 Jan 2023 18:36:15 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A varint implementation for PG?"
},
{
"msg_contents": "On Thu, Jan 05, 2023 at 06:36:15PM -0500, Robert Haas wrote:\n> Andres asked me off-list if I could take another look at this.\n\nI'm curious whether there are plans to pick this up again. IMHO it seems\nlike a generally good idea. AFAICT the newest version of the patch is in a\nseparate thread [0], which I just wanted to link here for posterity. If\nthere are no plans, I might give it a try, but otherwise I'm happy to help\nreview.\n\n[0] https://postgr.es/m/20221004234952.anrguppx5owewb6n%40awork3.anarazel.de\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 24 Apr 2024 15:56:49 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A varint implementation for PG?"
}
] |
[
{
"msg_contents": "While testing, my colleague Vignesh has hit an assert in\nTransactionGroupUpdateXidStatus. But that is not reproducible. After\nsome analysis and code review, I have found the reason for the same.\n\nAs shown in the below code, there is an assert in\nTransactionGroupUpdateXidStatus, which assumes that an overflowed\ntransaction can never get registered for the group update. But,\nactually, that is not true because while registering the transaction\nfor group update, we only check how many committed children this\ntransaction has because all aborted sub-transaction would have already\nupdated their status. So if the transaction once overflowed but later\nall its children are aborted (i.e remaining committed children are <=\nTHRESHOLD_SUBTRANS_CLOG_OPT) then it will be registered for the group\nupdate.\n\n/*\n* Overflowed transactions should not use group XID status update\n* mechanism.\n*/\nAssert(!pgxact->overflowed);\n\nA solution could be either we remove this assert or change this assert\nto Assert(pgxact->nxids <= THRESHOLD_SUBTRANS_CLOG_OPT);\n\nNote: I could not come up with the reproducible test case as we can\nnot ensure whether a backend will try to group updates or not because\nthat depends upon whether it gets the CLogControlLock or not.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 10 Dec 2019 13:55:40 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Wrong assert in TransactionGroupUpdateXidStatus"
},
{
"msg_contents": "Hi,\n\nAmit, Robert, IIRC that's mostly your feature?\n\nOn 2019-12-10 13:55:40 +0530, Dilip Kumar wrote:\n> While testing, my colleague Vignesh has hit an assert in\n> TransactionGroupUpdateXidStatus. But that is not reproducible. After\n> some analysis and code review, I have found the reason for the same.\n> \n> As shown in the below code, there is an assert in\n> TransactionGroupUpdateXidStatus, which assumes that an overflowed\n> transaction can never get registered for the group update. But,\n> actually, that is not true because while registering the transaction\n> for group update, we only check how many committed children this\n> transaction has because all aborted sub-transaction would have already\n> updated their status. So if the transaction once overflowed but later\n> all its children are aborted (i.e remaining committed children are <=\n> THRESHOLD_SUBTRANS_CLOG_OPT) then it will be registered for the group\n> update.\n\n> /*\n> * Overflowed transactions should not use group XID status update\n> * mechanism.\n> */\n> Assert(!pgxact->overflowed);\n> \n> A solution could be either we remove this assert or change this assert\n> to Assert(pgxact->nxids <= THRESHOLD_SUBTRANS_CLOG_OPT);\n\nMaybe I'm missing something, but isn't this a bug then? IIRC We can't\nrely on MyProc->subxids once we overflowed, even if since then the\nremaining number of children has become low enough. It seems to me that\nthe actual fix here is to correct the condition in\nTransactionIdSetPageStatus() checking whether group updates are possible\n- it seems it'd need to verify that the transaction isn't\noverflowed.\n\n\nAlso, it's somewhat odd that TransactionIdSetPageStatus() first has\n\n\t/* Can't use group update when PGPROC overflows. 
*/\n\tStaticAssertStmt(THRESHOLD_SUBTRANS_CLOG_OPT <= PGPROC_MAX_CACHED_SUBXIDS,\n\t\t\t\t\t \"group clog threshold less than PGPROC cached subxids\");\n\nand then, within an if():\n\n\t\t/*\n\t\t * We don't try to do group update optimization if a process has\n\t\t * overflowed the subxids array in its PGPROC, since in that case we\n\t\t * don't have a complete list of XIDs for it.\n\t\t */\n\t\tAssert(THRESHOLD_SUBTRANS_CLOG_OPT <= PGPROC_MAX_CACHED_SUBXIDS);\n\nEven if these weren't redundant, it can't make sense to test such a\nstatic condition only within an if? Is it possible this was actually\nintended to test something different?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 10 Dec 2019 14:32:21 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Wrong assert in TransactionGroupUpdateXidStatus"
},
{
"msg_contents": "On Wed, Dec 11, 2019 at 4:02 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> Amit, Robert, IIRC that's mostly your feature?\n>\n\nI will look into this today.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 11 Dec 2019 08:41:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong assert in TransactionGroupUpdateXidStatus"
},
{
"msg_contents": "On Wed, Dec 11, 2019 at 4:02 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-12-10 13:55:40 +0530, Dilip Kumar wrote:\n>\n> > /*\n> > * Overflowed transactions should not use group XID status update\n> > * mechanism.\n> > */\n> > Assert(!pgxact->overflowed);\n> >\n> > A solution could be either we remove this assert or change this assert\n> > to Assert(pgxact->nxids <= THRESHOLD_SUBTRANS_CLOG_OPT);\n>\n> Maybe I'm missing something, but isn't this a bug then? IIRC We can't\n> rely on MyProc->subxids once we overflowed, even if since then the\n> remaining number of children has become low enough.\n>\n\nAFAICS, the MyProc->subxids is maintained properly if the number of\nsubtransactions is lesser than PGPROC_MAX_CACHED_SUBXIDS (64). Can\nyou explain the case where that won't be true? Also, even if what you\nare saying is true, I think the memcmp in TransactionIdSetPageStatus\nshould avoid taking us a wrong decision.\n\n\n>\n> Also, it's somewhat odd that TransactionIdSetPageStatus() first has\n>\n> /* Can't use group update when PGPROC overflows. */\n> StaticAssertStmt(THRESHOLD_SUBTRANS_CLOG_OPT <= PGPROC_MAX_CACHED_SUBXIDS,\n> \"group clog threshold less than PGPROC cached subxids\");\n>\n> and then, within an if():\n>\n> /*\n> * We don't try to do group update optimization if a process has\n> * overflowed the subxids array in its PGPROC, since in that case we\n> * don't have a complete list of XIDs for it.\n> */\n> Assert(THRESHOLD_SUBTRANS_CLOG_OPT <= PGPROC_MAX_CACHED_SUBXIDS);\n>\n> Even if these weren't redundant, it can't make sense to test such a\n> static condition only within an if?\n>\n\nI don't remember exactly the reason for this, but now I don't find the\nAssert within if () meaningful. I think we should remove the Assert\ninside if() unless Robert or someone see any use of it.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 11 Dec 2019 11:00:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong assert in TransactionGroupUpdateXidStatus"
},
{
"msg_contents": "On Wed, Dec 11, 2019 at 11:00 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Dec 11, 2019 at 4:02 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2019-12-10 13:55:40 +0530, Dilip Kumar wrote:\n> >\n> > > /*\n> > > * Overflowed transactions should not use group XID status update\n> > > * mechanism.\n> > > */\n> > > Assert(!pgxact->overflowed);\n> > >\n> > > A solution could be either we remove this assert or change this assert\n> > > to Assert(pgxact->nxids <= THRESHOLD_SUBTRANS_CLOG_OPT);\n> >\n> > Maybe I'm missing something, but isn't this a bug then? IIRC We can't\n> > rely on MyProc->subxids once we overflowed, even if since then the\n> > remaining number of children has become low enough.\n> >\n>\n> AFAICS, the MyProc->subxids is maintained properly if the number of\n> subtransactions is lesser than PGPROC_MAX_CACHED_SUBXIDS (64). Can\n> you explain the case where that won't be true? Also, even if what you\n> are saying is true, I think the memcmp in TransactionIdSetPageStatus\n> should avoid taking us a wrong decision.\n>\n\nI am able to reproduce the issue by reducing the values of\nPGPROC_MAX_CACHED_SUBXIDS and THRESHOLD_SUBTRANS_CLOG_OPT to 2. Below\nis what I did after reducing the values:\nSession-1\n--------------\npostgres=# begin;\nBEGIN\npostgres=# insert into t1 values(1);\nINSERT 0 1\npostgres=# savepoint s1;\nSAVEPOINT\npostgres=# insert into t1 values(2);\nINSERT 0 1\npostgres=# savepoint s2;\nSAVEPOINT\npostgres=# insert into t1 values(3);\nINSERT 0 1\npostgres=# savepoint s3;\nSAVEPOINT\npostgres=# insert into t1 values(4);\nINSERT 0 1\npostgres=# rollback to s2;\nROLLBACK\n\nSession-2\n---------------\ninsert into t1 values(4); -- attach debugger and stop in\nTransactionIdSetPageStatus after acquiring CLogControlLock\n\nSession-1\n---------------\nCommit; -- This will wait to acquire CLogControlLock in a group\nupdate path (TransactionGroupUpdateXidStatus). Now, continue in the\nsession-2 debugger. 
After that continue in session-1's debugger and\nit will hit the Assert.\n\nThe attached patch fixes it by changing the Assert. I have\nadditionally removed the redundant Assert in\nTransactionIdSetPageStatus as pointed out by Andres. I am planning to\ncommit and backpatch this early next week (Monday) unless someone\nwants to review it further or has objections to it.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 12 Dec 2019 12:13:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong assert in TransactionGroupUpdateXidStatus"
},
{
"msg_contents": "On 2019-Dec-11, Amit Kapila wrote:\n\n> On Wed, Dec 11, 2019 at 4:02 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2019-12-10 13:55:40 +0530, Dilip Kumar wrote:\n\n> > /*\n> > * We don't try to do group update optimization if a process has\n> > * overflowed the subxids array in its PGPROC, since in that case we\n> > * don't have a complete list of XIDs for it.\n> > */\n> > Assert(THRESHOLD_SUBTRANS_CLOG_OPT <= PGPROC_MAX_CACHED_SUBXIDS);\n> >\n> > Even if these weren't redundant, it can't make sense to test such a\n> > static condition only within an if?\n> \n> I don't remember exactly the reason for this, but now I don't find the\n> Assert within if () meaningful. I think we should remove the Assert\n> inside if() unless Robert or someone see any use of it.\n\nThe more I look at both these asserts, the less sense they make. Why\ndoes clog.c care about PGPROC at all? Looking at the callers of that\nroutine, nowhere do they concern themselves with whether the overflowed\nflag has been set or not. It seems to me that the StaticAssert() should\nbe near the PGPROC_MAX_CACHED_SUBXIDS definition, not the SUBTRANS\ndefinition (maybe as StaticAssertDecl, as in\n201DD0641B056142AC8C6645EC1B5F62014B8E8030@SYD1217 )\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 12 Dec 2019 09:40:06 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong assert in TransactionGroupUpdateXidStatus"
},
{
"msg_contents": "On Thu, Dec 12, 2019 at 6:10 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Dec-11, Amit Kapila wrote:\n>\n> > On Wed, Dec 11, 2019 at 4:02 AM Andres Freund <andres@anarazel.de> wrote:\n> > > On 2019-12-10 13:55:40 +0530, Dilip Kumar wrote:\n>\n> > > /*\n> > > * We don't try to do group update optimization if a process has\n> > > * overflowed the subxids array in its PGPROC, since in that case we\n> > > * don't have a complete list of XIDs for it.\n> > > */\n> > > Assert(THRESHOLD_SUBTRANS_CLOG_OPT <= PGPROC_MAX_CACHED_SUBXIDS);\n> > >\n> > > Even if these weren't redundant, it can't make sense to test such a\n> > > static condition only within an if?\n> >\n> > I don't remember exactly the reason for this, but now I don't find the\n> > Assert within if () meaningful. I think we should remove the Assert\n> > inside if() unless Robert or someone see any use of it.\n>\n> The more I look at both these asserts, the less sense they make. Why\n> does clog.c care about PGPROC at all?\n>\n\nIt is mainly for group updates. Basically, we want to piggyback the\nprocs that are trying to update clog at the same time on the proc\nwhich got the CLogControlLock. This avoids taking/releasing that lock\nmultiple times. See TransactionGroupUpdateXidStatus.\n\n> Looking at the callers of that\n> routine, nowhere do they concern themselves with whether the overflowed\n> flag has been set or not. It seems to me that the StaticAssert() should\n> be near the PGPROC_MAX_CACHED_SUBXIDS definition, not the SUBTRANS\n> definition (maybe as StaticAssertDecl, as in\n> 201DD0641B056142AC8C6645EC1B5F62014B8E8030@SYD1217 )\n>\n\nSounds reasonable. We can do that once the patch mentioned by you got\ncommitted. For now, we are planning to just remove the Assert inside\nif() condition. Do you see any problem with that?\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 12 Dec 2019 18:55:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong assert in TransactionGroupUpdateXidStatus"
},
{
"msg_contents": "On 2019-Dec-12, Amit Kapila wrote:\n\n> On Thu, Dec 12, 2019 at 6:10 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> > The more I look at both these asserts, the less sense they make. Why\n> > does clog.c care about PGPROC at all?\n> \n> It is mainly for group updates. Basically, we want to piggyback the\n> procs that are trying to update clog at the same time on the proc\n> which got the CLogControlLock. This avoids taking/releasing that lock\n> multiple times. See TransactionGroupUpdateXidStatus.\n\nYeah, I (think I) understand that. My point is that conceptually, the\nfact that a PGPROC has overflowed does not really affect clog.c in any\nway.\n\n> > Looking at the callers of that routine, nowhere do they concern\n> > themselves with whether the overflowed\n> > flag has been set or not. It seems to me that the StaticAssert() should\n> > be near the PGPROC_MAX_CACHED_SUBXIDS definition, not the SUBTRANS\n> > definition (maybe as StaticAssertDecl, as in\n> > 201DD0641B056142AC8C6645EC1B5F62014B8E8030@SYD1217 )\n> \n> Sounds reasonable. We can do that once the patch mentioned by you got\n> committed. For now, we are planning to just remove the Assert inside\n> if() condition. Do you see any problem with that?\n\nNope.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 12 Dec 2019 12:09:17 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong assert in TransactionGroupUpdateXidStatus"
},
{
"msg_contents": "On Tue, Dec 10, 2019 at 4:32 PM Andres Freund <andres@anarazel.de> wrote:\n> and then, within an if():\n>\n> /*\n> * We don't try to do group update optimization if a process has\n> * overflowed the subxids array in its PGPROC, since in that case we\n> * don't have a complete list of XIDs for it.\n> */\n> Assert(THRESHOLD_SUBTRANS_CLOG_OPT <= PGPROC_MAX_CACHED_SUBXIDS);\n>\n> Even if these weren't redundant, it can't make sense to test such a\n> static condition only within an if? Is it possible this was actually\n> intended to test something different?\n\nBased on the comment, I imagine it might've been intended to read\nAssert(nsubxids <= PGPROC_MAX_CACHED_SUBXIDS).\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 12 Dec 2019 09:14:03 -0600",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong assert in TransactionGroupUpdateXidStatus"
},
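The distinction the thread converges on — keep the static relationship between the two constants as a compile-time check at the definition site, and (if anything) assert the run-time property on nsubxids — can be sketched in standalone C. The constant values and the function name below are illustrative stand-ins, not the real definitions from the PostgreSQL headers:

```c
#include <assert.h>

/* Illustrative stand-ins for the two limits discussed above; the real
 * values live in the subtrans and proc headers. */
#define THRESHOLD_SUBTRANS_CLOG_OPT 5
#define PGPROC_MAX_CACHED_SUBXIDS 64

/* A condition on compile-time constants belongs in a static assertion,
 * checked once per build, not in an Assert() inside an if() branch. */
_Static_assert(THRESHOLD_SUBTRANS_CLOG_OPT <= PGPROC_MAX_CACHED_SUBXIDS,
               "group-update threshold must fit within the subxid cache");

/* The run-time property Robert guesses was intended: the actual subxid
 * count must fit in the PGPROC cache for group update to be possible. */
static int group_update_applicable(int nsubxids)
{
    assert(nsubxids <= PGPROC_MAX_CACHED_SUBXIDS);
    return nsubxids <= THRESHOLD_SUBTRANS_CLOG_OPT;
}
```

With the static assertion placed next to the constant definitions, as suggested above, the relationship is verified once per build rather than re-tested at run time inside a branch.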
{
"msg_contents": "On Thu, Dec 12, 2019 at 8:44 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Dec 10, 2019 at 4:32 PM Andres Freund <andres@anarazel.de> wrote:\n> > and then, within an if():\n> >\n> > /*\n> > * We don't try to do group update optimization if a process has\n> > * overflowed the subxids array in its PGPROC, since in that case we\n> > * don't have a complete list of XIDs for it.\n> > */\n> > Assert(THRESHOLD_SUBTRANS_CLOG_OPT <= PGPROC_MAX_CACHED_SUBXIDS);\n> >\n> > Even if these weren't redundant, it can't make sense to test such a\n> > static condition only within an if? Is it possible this was actually\n> > intended to test something different?\n>\n> Based on the comment, I imagine it might've been intended to read\n> Assert(nsubxids <= PGPROC_MAX_CACHED_SUBXIDS).\n>\n\nDo you think we need such an Assert after having StaticAssert for\n(THRESHOLD_SUBTRANS_CLOG_OPT <= PGPROC_MAX_CACHED_SUBXIDS) and then\nan if statement containing (nsubxids <= THRESHOLD_SUBTRANS_CLOG_OPT)\njust before this Assert? Sure, we can keep this for extra safety, but\nI don't see the need for it.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 13 Dec 2019 07:53:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong assert in TransactionGroupUpdateXidStatus"
},
{
"msg_contents": "On Thu, Dec 12, 2019 at 9:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Do you think we need such an Assert after having StaticAssert for\n> (THRESHOLD_SUBTRANS_CLOG_OPT <= PGPROC_MAX_CACHED_SUBXIDS) and then\n> an if statement containing (nsubxids <= THRESHOLD_SUBTRANS_CLOG_OPT)\n> just before this Assert? Sure, we can keep this for extra safety, but\n> I don't see the need for it.\n\nI don't have strong feelings about it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 14 Dec 2019 22:21:08 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong assert in TransactionGroupUpdateXidStatus"
},
{
"msg_contents": "On Sun, Dec 15, 2019 at 8:51 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Dec 12, 2019 at 9:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Do you think we need such an Assert after having StaticAssert for\n> > (THRESHOLD_SUBTRANS_CLOG_OPT <= PGPROC_MAX_CACHED_SUBXIDS) and then\n> > an if statement containing (nsubxids <= THRESHOLD_SUBTRANS_CLOG_OPT)\n> > just before this Assert? Sure, we can keep this for extra safety, but\n> > I don't see the need for it.\n>\n> I don't have strong feelings about it.\n>\n\nOkay, in that case, I am planning to push this patch [1] tomorrow\nmorning unless I see any other comments. I am also planning to\nbackpatch this through 10 where it got introduced, even though this is\nnot a serious bug, but I think it is better to keep the code\nconsistent in back branches.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1JZ5EipQ8Ta6eLMX_ni3CNtZDUrvHg0th1C8n%3D%2Bk%2B0ojg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 16 Dec 2019 08:53:47 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong assert in TransactionGroupUpdateXidStatus"
},
{
"msg_contents": "On Mon, Dec 16, 2019 at 8:53 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Dec 15, 2019 at 8:51 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Thu, Dec 12, 2019 at 9:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > Do you think we need such an Assert after having StaticAssert for\n> > > (THRESHOLD_SUBTRANS_CLOG_OPT <= PGPROC_MAX_CACHED_SUBXIDS) and then\n> > > an if statement containing (nsubxids <= THRESHOLD_SUBTRANS_CLOG_OPT)\n> > > just before this Assert? Sure, we can keep this for extra safety, but\n> > > I don't see the need for it.\n> >\n> > I don't have strong feelings about it.\n> >\n>\n> Okay, in that case, I am planning to push this patch [1] tomorrow\n> morning unless I see any other comments. I am also planning to\n> backpatch this through 10 where it got introduced,\n>\n\nThis was introduced in 11, so pushed and backpatched through 11.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Dec 2019 10:13:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong assert in TransactionGroupUpdateXidStatus"
}
]
[
{
"msg_contents": "Start Walreceiver completely before shut down it on standby server.\n\nThe walreceiver will be shut down, when read an invalid record in the\nWAL streaming from master.And then, we retry from archive/pg_wal again.\n\nAfter that, we start walreceiver in RequestXLogStreaming(), and read\nrecord from the WAL streaming. But before walreceiver starts, we read\ndata from file which be streamed over and present in pg_wal by last\ntime, because of walrcv->receivedUpto > RecPtr and the wal is actually\nflush on disk. Now, we read the invalid record again, what the next to\ndo? Shut down the walreceiver and do it again.\n\nSo, we always read the invalid record, starting the walreceiver and make\nit down before it starts completely.\n\nThis code fix it by set the walrcv->receivedUpto to the starting point,\nwe can read nothing before the walreceiver starts and streaming.",
"msg_date": "Tue, 10 Dec 2019 17:52:29 +0800",
"msg_from": "jiankang liu <liujk1994@gmail.com>",
"msg_from_op": true,
"msg_subject": "Start Walreceiver completely before shut down it on standby server."
},
{
"msg_contents": "On Tue, Dec 10, 2019 at 3:06 AM jiankang liu <liujk1994@gmail.com> wrote:\n\n> Start Walreceiver completely before shut down it on standby server.\n>\n> The walreceiver will be shut down, when read an invalid record in the\n> WAL streaming from master.And then, we retry from archive/pg_wal again.\n>\n> After that, we start walreceiver in RequestXLogStreaming(), and read\n> record from the WAL streaming. But before walreceiver starts, we read\n> data from file which be streamed over and present in pg_wal by last\n> time, because of walrcv->receivedUpto > RecPtr and the wal is actually\n> flush on disk. Now, we read the invalid record again, what the next to\n> do? Shut down the walreceiver and do it again.\n>\n\nI am missing something here, if walrcv->receivedUpto > RecPtr, why are we\ngetting / reading invalid record?",
"msg_date": "Tue, 10 Dec 2019 10:40:53 -0800",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Start Walreceiver completely before shut down it on standby\n server."
},
{
"msg_contents": "At Tue, 10 Dec 2019 10:40:53 -0800, Ashwin Agrawal <aagrawal@pivotal.io> wrote in \n> On Tue, Dec 10, 2019 at 3:06 AM jiankang liu <liujk1994@gmail.com> wrote:\n> \n> > Start Walreceiver completely before shut down it on standby server.\n> >\n> > The walreceiver will be shut down, when read an invalid record in the\n> > WAL streaming from master.And then, we retry from archive/pg_wal again.\n> >\n> > After that, we start walreceiver in RequestXLogStreaming(), and read\n> > record from the WAL streaming. But before walreceiver starts, we read\n> > data from file which be streamed over and present in pg_wal by last\n> > time, because of walrcv->receivedUpto > RecPtr and the wal is actually\n> > flush on disk. Now, we read the invalid record again, what the next to\n> > do? Shut down the walreceiver and do it again.\n> >\n> \n> I am missing something here, if walrcv->receivedUpto > RecPtr, why are we\n> getting / reading invalid record?\n\nI bet on that the standby is connecting to a wrong master. For\nexample, something like happens when the master has been reinitalized\nfrom a backup and experienced another history, then the standby was\ninitialized from the reborn master but the stale archive files on the\nstandby are left alone.\n\nAnyway that cannot happen on correctly running replication set and\nwhat to do in the case is starting from a new basebackup of the\nmaster, making sure to erase stale archive files if any.\n\nAbout the proposed fix, it doesn't seem to cause start process to\nrewind WAL to that LSN. Even if that happens, it leads to no better\nthan a broken database.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 11 Dec 2019 14:37:37 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Start Walreceiver completely before shut down it on standby\n server."
},
{
"msg_contents": "I'm sorry I did not say it clearly.\n\nDuring my use of PG, I encountered such errors \"incorrect resource manager\ndata checksum in record at 0/5013730\",\nit will keeps printing this error message and never stops on standby\nserver,at same time, the walreceier process is lost.\nIn a few months, we encountered this situation twice, each time testing\nmore than 200 connections to read/write, after 2 or 3 days of continuous\noperation.\nMaybe disk problem, but also the line is operating system problems, leading\noff the data disk error occurred.\n\nIt will print this error never stops, and the walreceiver is lost, if we do\nnothing.\nJust restart standby server of PG, it only print this error message once,\nand then connect to master. Everthing is OK.\nWhy we neet to restart my server, it can not fixs that problem online? Why\nthe walreceiver is lost?\n\nThe record has been flushed to disk by the walreceiver, and the the startup\nprocess always read record and apply it. When it reads an invalid record,\nit will shut down the walreceiver by signal SIGTERM. Then, it will read\nfrom ARCHIVE/PG_WAL, just read files from pg_wal. Read an invalid record\nagain or read the file end(the read len is not equals XLOG_BLCKSZ), the\nstartup process will starts the walreceiver by RequestXLogStreaming() and\nswitch to read from XLOG_FROM_STREAM.\n\nIn RequestXLogStreaming(), set the walrcv->receiveStart = recptr, the\nwalreceiver will get the WAL from master start at walrcv->receiveStart. So\nwe can read the new data which streaming from master by the walreceiver\nthis time, instead of the wrong data on disk. It should not print the error\nmessage never stop and the walreceiver should not be lost after we read an\ninvalid record. But the fact is not work.\n\nWhat happened?\nThe previous step, the startup process starts the walreceiver, and switch\nto read from XLOG_FROM_STREAM. Then, check the walreceiver is active before\nwe read it, even the postmaster does not start the walreceiver, but the\nwalrcv->walRcvState == STARTING, we think the walreceiver is active, and\nready to read.\n\nNow, begin to read data if new data has arrived. How to check it?\nIf the Recptr, which is pointer we read, is lower than\nwalrcv->receivedUpto, we can read the data, even if the walreceiver does\nnot start completely and the data is OLD which has invalid reccord.\nRead it, and read an invalid reccord again, just stop the walreceiver\nagain(the walreceiver does not start completely, it has not pid, just set\nwalrcv->walRcvState = WALRCV_STOPPED). When the walreceiver starts, running\ninto WalReceiverMain(), check the walrcv->walRcvState == WALRCV_STOPPED,\nmyself has been shut down by others, just exit. So the walreceiver starts,\nexit again and again.\nThe startup process next to do is, starts the walreceiver, read data(read\nthe invalid record), shut down the walreceiver, also agiain and again.\n\nWhy restart standby server of PG will be OK?\nThe startup process begin to REDO, reads an invalid record, prints the\nerror message, and starts the walreceiver by RequestXLogStreaming() and\nswitch to read from XLOG_FROM_STREAM. This is first time to start the\nwalreceiver, set walrcv->receivedUpto = walrcv->receiveStart = recptr.\nThe startup process ready to read new data, but RecPtr >=\nwalrcv->receivedUpto, wait the walreceiver get WAL from master.\nSo that, we get the WAL from master instead of the WAL on disk, by restart\nstandby server of PG.\n\nBy my fix, ervery time we start the walreceiver, the startup process will\nwait for new data instead of read OLD data, Just like restart standby\nserver.\nSo, we can fix the problem online and the walreceiver will not be lost.\n\n\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote on Wed, Dec 11, 2019 at 1:38 PM:\n\n> At Tue, 10 Dec 2019 10:40:53 -0800, Ashwin Agrawal <aagrawal@pivotal.io>\n> wrote in\n> > On Tue, Dec 10, 2019 at 3:06 AM jiankang liu <liujk1994@gmail.com>\n> wrote:\n> >\n> > > Start Walreceiver completely before shut down it on standby server.\n> > >\n> > > The walreceiver will be shut down, when read an invalid record in the\n> > > WAL streaming from master.And then, we retry from archive/pg_wal again.\n> > >\n> > > After that, we start walreceiver in RequestXLogStreaming(), and read\n> > > record from the WAL streaming. But before walreceiver starts, we read\n> > > data from file which be streamed over and present in pg_wal by last\n> > > time, because of walrcv->receivedUpto > RecPtr and the wal is actually\n> > > flush on disk. Now, we read the invalid record again, what the next to\n> > > do? Shut down the walreceiver and do it again.\n> > >\n> >\n> > I am missing something here, if walrcv->receivedUpto > RecPtr, why are we\n> > getting / reading invalid record?\n>\n> I bet on that the standby is connecting to a wrong master. For\n> example, something like happens when the master has been reinitalized\n> from a backup and experienced another history, then the standby was\n> initialized from the reborn master but the stale archive files on the\n> standby are left alone.\n>\n> Anyway that cannot happen on correctly running replication set and\n> what to do in the case is starting from a new basebackup of the\n> master, making sure to erase stale archive files if any.\n>\n> About the proposed fix, it doesn't seem to cause start process to\n> rewind WAL to that LSN. Even if that happens, it leads to no better\n> than a broken database.\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>",
"msg_date": "Wed, 11 Dec 2019 16:06:26 +0800",
"msg_from": "jiankang liu <liujk1994@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Start Walreceiver completely before shut down it on standby\n server."
},
{
"msg_contents": "On Wed, Dec 11, 2019 at 04:06:26PM +0800, jiankang liu wrote:\n> During my use of PG, I encountered such errors \"incorrect resource manager\n> data checksum in record at 0/5013730\",\n> it will keeps printing this error message and never stops on standby\n> server,at same time, the walreceier process is lost.\n> Maybe disk problem, but also the line is operating system problems, leading\n> off the data disk error occurred.\n\nAn incorrect resource manager checksum points to the checksum of a\nrecord to be busted, because the full record has been read from disk,\nbut it is failing a basic sanity check. That's basically a sign of\ncorruption. Do you have data checksums enabled? If your WAL records\nare broken, you have unfortunately good chances of having problems in\nother areas of your data folder :(\n\nAt this stage, it would be wiser to run diagnostics on your server,\nand be very careful with your chosen steps. Here are guidelines on\nthe wiki:\nhttps://wiki.postgresql.org/wiki/Corruption\n--\nMichael",
"msg_date": "Wed, 11 Dec 2019 20:13:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Start Walreceiver completely before shut down it on standby\n server."
}
]
[
{
"msg_contents": "I recently became annoyed while working on patch A that I could not\nuse simplehash in shared memory, and then I became annoyed again while\nworking on patch B that I could not use simplehash in frontend code.\nSo here are a few patches for discussion.\n\nA significant problem in either case is that a simplehash wants to\nlive in a memory context; no such thing exists either for data in\nshared memory nor in frontend code. However, it seems to be quite easy\nto provide a way for simplehash to be defined so that it doesn't care\nabout memory contexts. See 0001.\n\nAs far as frontend code goes, the only other problem I found is that\nit makes use of elog() to signal some internal-ish messages. It seemed\nto me that the easiest thing to do was, if FRONTEND is defined, use\npg_log_error(...) instead of elog(ERROR, ...). For the one instance of\nelog(LOG, ...) in simplehash.h, I chose to use pg_log_info(). It's not\nreally equivalent, but it's probably the closest thing that currently\nexists, and I think it's good enough for what's basically a debugging\nmessage. See 0002.\n\nI think those changes would also be enough to allow simplehash to be\nused in a dynamic shared area (DSA). Using it in the main shared\nmemory segment seems more problematic, because simplehash relies on\nbeing able to resize the hash table. Shared hash tables must have a\nfixed maximum size, but with dynahash, we can count on being able to\nuse all of the entries without significant performance degradation.\nsimplehash, on the other hand, uses linear probing and relies on being\nable to grow the hash table as a way of escaping collisions. By\ndefault, the load factor is not permitted to drop below 0.1, so to\nmimic the collision-avoidance behavior that we get in backend-private\nuses of simplehash, we'd have to overallocate by 10x, which doesn't\nseem desirable.\n\nI'd really like to have an alternative to dynahash, which is awkward\nto use and probably not particularly fast, but I'm not sure simplehash\nis it. Maybe what we really need is a third (or nth) hash table\nimplementation.\n\nThoughts?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 10 Dec 2019 13:07:02 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "allowing broader use of simplehash"
},
{
"msg_contents": "Hi,\n\nNeat!\n\nOn 2019-12-10 13:07:02 -0500, Robert Haas wrote:\n> I recently became annoyed while working on patch A that I could not\n> use simplehash in shared memory, and then I became annoyed again while\n> working on patch B that I could not use simplehash in frontend code.\n> So here are a few patches for discussion.\n\nI wanted to use it in frontend code a couple times as well.\n\n\n> A significant problem in either case is that a simplehash wants to\n> live in a memory context; no such thing exists either for data in\n> shared memory nor in frontend code. However, it seems to be quite easy\n> to provide a way for simplehash to be defined so that it doesn't care\n> about memory contexts. See 0001.\n\nI wonder if we shouldn't instead just go for an \"implicit\" memory\ncontext instead. It's a bit ugly to have a growing set of different\nsignatures.\n\n\n> As far as frontend code goes, the only other problem I found is that\n> it makes use of elog() to signal some internal-ish messages. It seemed\n> to me that the easiest thing to do was, if FRONTEND is defined, use\n> pg_log_error(...) instead of elog(ERROR, ...). For the one instance of\n> elog(LOG, ...) in simplehash.h, I chose to use pg_log_info(). It's not\n> really equivalent, but it's probably the closest thing that currently\n> exists, and I think it's good enough for what's basically a debugging\n> message. See 0002.\n\nYea, I think that's fine.\n\n\n> I think those changes would also be enough to allow simplehash to be\n> used in a dynamic shared area (DSA). Using it in the main shared\n> memory segment seems more problematic, because simplehash relies on\n> being able to resize the hash table. Shared hash tables must have a\n> fixed maximum size, but with dynahash, we can count on being able to\n> use all of the entries without significant performance degradation.\n> simplehash, on the other hand, uses linear probing and relies on being\n> able to grow the hash table as a way of escaping collisions. By\n> default, the load factor is not permitted to drop below 0.1, so to\n> mimic the collision-avoidance behavior that we get in backend-private\n> uses of simplehash, we'd have to overallocate by 10x, which doesn't\n> seem desirable.\n\nIt'd be fine to set SH_GROW_MIN_FILLFACTOR to something higher, for many\nuses. I've only added that after the fact, because somebody demonstrated\na workload with SQL level data that had a *lot* of conflicts with our\nhash functions. But that shouldn't be a concern for most other uses.\n\n\n> I'd really like to have an alternative to dynahash, which is awkward\n> to use and probably not particularly fast, but I'm not sure simplehash\n> is it. Maybe what we really need is a third (or nth) hash table\n> implementation.\n\nI think it depends a bit on what use-cases you'd like to cover? I think\nthere's unfortunately a lot of tradeoffs here that are hard to square:\n\n1) For performance, we want the hashtable code to be specialized for the\n specific key/value combination. I'm not aware of a way to do that\n without some code generation thing like simplehash. Being able to use\n simpler pointer math by having fixed sizes, and avoiding indirect\n function calls, is important for performance.\n\n It's fairly annoying to have to do the song-and-dance for simplehash\n when it's just a local lookup table or something.\n\n2) For performance, using a chained hashtable turns out to be\n problematic, if the hashtable will often get big (for small tables\n the CPU caches makes it ok). It's hard to avoid reallocations\n (and/or smoother growth) for non-chaining tables however.\n\n3) For lots of one-off uses of hashtables that aren't performance\n critical, we want a *simple* API. That IMO would mean that key/value\n end up being separately allocated pointers, and that just a\n comparator is provided when creating the hashtable.\n\n4) For some hashtables it's important to be very concurrent - but it's\n considerably harder to do that with an open addressing one.\n\nWhile I don't think it's possible to avoid compromise on all these\naspects, I think it'd be a lot more realistic to have one implementation\nfulfilling most needs (except perhaps the concurrency part) if we didn't\nhave the limitations of C. This kind of thing really is one where\ne.g. C++ style templates are just extremely hard to beat in C.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 10 Dec 2019 13:59:54 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: allowing broader use of simplehash"
},
{
"msg_contents": "On Tue, Dec 10, 2019 at 4:59 PM Andres Freund <andres@anarazel.de> wrote:\n> Neat!\n\nThanks.\n\n> > A significant problem in either case is that a simplehash wants to\n> > live in a memory context; no such thing exists either for data in\n> > shared memory nor in frontend code. However, it seems to be quite easy\n> > to provide a way for simplehash to be defined so that it doesn't care\n> > about memory contexts. See 0001.\n>\n> I wonder if we shouldn't instead just go for an \"implicit\" memory\n> context instead. It's a bit ugly to have a growing set of different\n> signatures.\n\nI don't really know what you mean by this. I don't actually think the\ndifferent signatures are a big deal. It affects a pretty limited\nnumber of functions, and that seems less ugly than trying to create\nsome sort of dummy not-really-a-context object that can live in\nfrontend code, and a lot easier than actually making contexts work in\nfrontend code. The latter might be the better long-term solution, but\nI don't think we should insist on doing it first.\n\nAnother way forward would be to replace the MemoryContext references\nwith a void * that happens, in the case of the backend, to be a\nMemoryContext, and could be NULL when none is required. However, that\nwould give up some type-checking for no current benefit. If simplehash\nbecomes more widely used and at some point it's clear that this would\nbe a net win, we can change it then.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 11 Dec 2019 10:05:00 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing broader use of simplehash"
},
{
"msg_contents": "On Tue, Dec 10, 2019 at 4:59 PM Andres Freund <andres@anarazel.de> wrote:\n> 3) For lots of one-off uses of hashtables that aren't performance\n> critical, we want a *simple* API. That IMO would mean that key/value\n> end up being separately allocated pointers, and that just a\n> comparator is provided when creating the hashtable.\n\nI think the simplicity of the API is a key point. Some things that are\nbothersome about dynahash:\n\n- It knows about memory contexts and insists on having its own.\n- You can't just use a hash table in shared memory; you have to\n\"attach\" to it first and have an object in backend-private memory.\n- The usual way of getting a shared hash table is ShmemInitHash(), but\nthat means that the hash table has its own named chunk and that it's\nin the main shared memory segment. If you want to put it inside\nanother chunk or put it in DSM or whatever, it doesn't work.\n- It knows about LWLocks and if it's a shared table it needs its own\ntranche of them.\n- hash_search() is hard to wrap your head around.\n\nOne thing I dislike about simplehash is that the #define-based\ninterface is somewhat hard to use. It's not that it's a bad design.\nIt's just you have to sit down and think for a while to figure out\nwhich things you need to #define in order to get it to do what you\nwant. I'm not sure that's something that can or needs to be fixed, but\nit's something to consider. Even dynahash, as annoying as it is, is in\nsome ways easier to get up and running.\n\nProbably the two most common use cases are: (1) a fixed-sized shared\nmemory hash table of fixed-size entries where the key is the first N\nbytes of the entry and it never grows, or (2) a backend-private or\nperhaps frontend hash table of fixed-size entries where the key is the\nfirst N bytes of the entry, and it grows without limit. 
I think we should\nconsider having specialized APIs for those two cases and then more\ngeneral APIs that you can use when that's not enough.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 11 Dec 2019 10:50:16 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing broader use of simplehash"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-11 10:05:00 -0500, Robert Haas wrote:\n> On Tue, Dec 10, 2019 at 4:59 PM Andres Freund <andres@anarazel.de> wrote:\n> > > A significant problem in either case is that a simplehash wants to\n> > > live in a memory context; no such thing exists either for data in\n> > > shared memory nor in frontend code. However, it seems to be quite easy\n> > > to provide a way for simplehash to be defined so that it doesn't care\n> > > about memory contexts. See 0001.\n> >\n> > I wonder if we shouldn't instead just go for an \"implicit\" memory\n> > context instead. It's a bit ugly to have a growing set of different\n> > signatures.\n> \n> I don't really know what you mean by this. I don't actually think the\n> different signatures are a big deal. It affects a pretty limited\n> number of functions, and that seems less ugly than trying to create\n> some sort of dummy not-really-a-context object that can live in\n> frontend code, and a lot easier than actually making contexts work in\n> frontend code. The latter might be the better long-term solution, but\n> I don't think we should insist on doing it first.\n\nI was basically just thinking that we could pass the context to use via\nCurrentMemoryContext, instead of explicitly passing it in.\n\n\n> Another way forward would be to replace the MemoryContext references\n> with a void * that happens, in the case of the backend, to be a\n> MemoryContext, and could be NULL when none is required. However, that\n> would give up some type-checking for no current benefit. If simplehash\n> becomes more widely used and at some point it's clear that this would\n> be a net win, we can change it then.\n\nYea, that seems worse. I'd rather work on the MemoryContext\ninfrastructure being available for frontend code.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 12 Dec 2019 11:33:26 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: allowing broader use of simplehash"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-11 10:50:16 -0500, Robert Haas wrote:\n> On Tue, Dec 10, 2019 at 4:59 PM Andres Freund <andres@anarazel.de> wrote:\n> > 3) For lots of one-off uses of hashtables that aren't performance\n> > critical, we want a *simple* API. That IMO would mean that key/value\n> > end up being separately allocated pointers, and that just a\n> > comparator is provided when creating the hashtable.\n> \n> I think the simplicity of the API is a key point. Some things that are\n> bothersome about dynahash:\n> \n> - It knows about memory contexts and insists on having its own.\n\nWhich is a waste, in a good number of cases.\n\n\n> - You can't just use a hash table in shared memory; you have to\n> \"attach\" to it first and have an object in backend-private memory.\n\nI'm not quite sure there's all that good an alternative to this,\ntbh. For efficiency it's useful to have backend-local state, I\nthink. And I don't really see how to have that without needing to attach.\n\n\n> - The usual way of getting a shared hash table is ShmemInitHash(), but\n> that means that the hash table has its own named chunk and that it's\n> in the main shared memory segment. If you want to put it inside\n> another chunk or put it in DSM or whatever, it doesn't work.\n\nI don't think it's quite realistic for the same implementation - although\nthe code could partially be shared and just specialized for both cases -\nto be used for DSM and \"normal\" shared memory. That's however not an\nexcuse to have drastically different interfaces for both.\n\n\n\n> - It knows about LWLocks and if it's a shared table it needs its own\n> tranche of them.\n> - hash_search() is hard to wrap your head around.\n>\n\n> One thing I dislike about simplehash is that the #define-based\n> interface is somewhat hard to use. It's not that it's a bad design.\n\nI agree. It's the best I could come up with, taking the limitations of C into\naccount, when focusing on speed and type safety. 
I really think this\ntype of hack is a stopgap measure, and we ought to upgrade to a subset\nof C++.\n\n\n> It's just you have to sit down and think for a while to figure out\n> which things you need to #define in order to get it to do what you\n> want. I'm not sure that's something that can or needs to be fixed, but\n> it's something to consider. Even dynahash, as annoying as it is, is in\n> some ways easier to get up and running.\n\nI have been wondering about providing one simplehash wrapper in a\ncentral place that uses simplehash to store a {key*, value*}, and has a\ncreation interface that just accepts a comparator. Plus a few wrapper\ncreation functions for specific types (e.g. string, oid, int64). While\nwe'd not want to use that for really performance critical paths, for 80%\nof the cases it'd be sufficient.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 12 Dec 2019 11:51:40 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: allowing broader use of simplehash"
},
{
"msg_contents": "On Thu, Dec 12, 2019 at 2:33 PM Andres Freund <andres@anarazel.de> wrote:\n> I was basically just thinking that we could pass the context to use via\n> CurrentMemoryContext, instead of explicitly passing it in.\n\nI thought about that, but as a general rule, replacing a function\nparameter with a global variable is the wrong direction. One could\nargue this particular case is a counterexample, and I won't fight\ntooth and nail if you want to take that position, but I don't think I\nbelieve it myself.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 14 Dec 2019 22:24:35 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing broader use of simplehash"
},
{
"msg_contents": "On Sat, Dec 14, 2019 at 10:24 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Dec 12, 2019 at 2:33 PM Andres Freund <andres@anarazel.de> wrote:\n> > I was basically just thinking that we could pass the context to use via\n> > CurrentMemoryContext, instead of explicitly passing it in.\n>\n> I thought about that, but as a general rule, replacing a function\n> parameter with a global variable is the wrong direction. One could\n> argue this particular case is a counterexample, and I won't fight\n> tooth and nail if you want to take that position, but I don't think I\n> believe it myself.\n\nAfter confirming with Andres that he didn't have an objection to me\npressing forward with these patches, I have committed them.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 17 Dec 2019 14:17:50 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing broader use of simplehash"
}
]
[
{
"msg_contents": "Hello,\r\n\r\nI noticed that pg_control_init() is failing an assertion on 13devel:\r\n\r\n TRAP: FailedAssertion(\"(atti->attalign) == 's'\", File: \"heaptuple.c\", Line: 160)\r\n postgres: bossartn postgres [local] SELECT(ExceptionalCondition+0x55)[0x8e02b5]\r\n postgres: bossartn postgres [local] SELECT(heap_compute_data_size+0x224)[0x486eb4]\r\n postgres: bossartn postgres [local] SELECT(heap_form_tuple+0x82)[0x488262]\r\n postgres: bossartn postgres [local] SELECT(pg_control_init+0x259)[0x902ca9]\r\n postgres: bossartn postgres [local] SELECT(ExecMakeTableFunctionResult+0x345)[0x656a65]\r\n postgres: bossartn postgres [local] SELECT[0x667e03]\r\n postgres: bossartn postgres [local] SELECT(ExecScan+0x3ab)[0x6572db]\r\n postgres: bossartn postgres [local] SELECT(standard_ExecutorRun+0x137)[0x64f097]\r\n postgres: bossartn postgres [local] SELECT[0x7c5d8b]\r\n postgres: bossartn postgres [local] SELECT(PortalRun+0x1d1)[0x7c71c1]\r\n postgres: bossartn postgres [local] SELECT[0x7c34d1]\r\n postgres: bossartn postgres [local] SELECT(PostgresMain+0x624)[0x7c4aa4]\r\n postgres: bossartn postgres [local] SELECT[0x74109a]\r\n postgres: bossartn postgres [local] SELECT(main+0x440)[0x69f220]\r\n /lib64/libc.so.6(__libc_start_main+0xfd)[0x7fbb78309d1d]\r\n postgres: bossartn postgres [local] SELECT[0x47b039]\r\n 2019-12-10 20:02:03.100 UTC [20859] LOG: server process (PID 20868) was terminated by signal 6: Aborted\r\n 2019-12-10 20:02:03.100 UTC [20859] DETAIL: Failed process was running: SELECT * FROM pg_control_init();\r\n 2019-12-10 20:02:03.100 UTC [20859] LOG: terminating any other active server processes\r\n\r\nThe attached patch seems to clear this up. I think this was missed in\r\n2e4db241.\r\n\r\nNathan",
"msg_date": "Tue, 10 Dec 2019 21:11:12 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "pg_control_init() bug"
},
{
"msg_contents": "\"Bossart, Nathan\" <bossartn@amazon.com> writes:\n> I noticed that pg_control_init() is failing an assertion on 13devel:\n\nHmm, yeah. In a non-assert build I get\n\nregression=# select * from pg_control_init();\nERROR: function return row and query-specified return row do not match\nDETAIL: Returned row contains 12 attributes, but query expects 11.\n\n> The attached patch seems to clear this up. I think this was missed in\n> 2e4db241.\n\nEvidently. Thanks for the report!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Dec 2019 17:46:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_control_init() bug"
},
{
"msg_contents": "On Tue, Dec 10, 2019 at 05:46:48PM -0500, Tom Lane wrote:\n> \"Bossart, Nathan\" <bossartn@amazon.com> writes:\n> > I noticed that pg_control_init() is failing an assertion on 13devel:\n> \n> Hmm, yeah. In a non-assert build I get\n> \n> regression=# select * from pg_control_init();\n> ERROR: function return row and query-specified return row do not match\n> DETAIL: Returned row contains 12 attributes, but query expects 11.\n> \n> > The attached patch seems to clear this up. I think this was missed in\n> > 2e4db241.\n> \n> Evidently. Thanks for the report!\n\nFYI, this has been applied:\n\n\tFix tuple column count in pg_control_init().\n\t\n\tOversight in commit 2e4db241b.\n\t\n\tNathan Bossart\n\t\n\tDiscussion: https://postgr.es/m/1B616360-396A-4482-AA28-375566C86160@amazon.com\n\t\n\tBranch\n\t------\n\tmaster\n\t\n\tDetails\n\t-------\n\thttps://git.postgresql.org/pg/commitdiff/8729fa72483f8a9acf299508bb2cbae1aa9a29b8\n\t\n\tModified Files\n\t--------------\n\tsrc/backend/utils/misc/pg_controldata.c | 2 +-\n\t1 file changed, 1 insertion(+), 1 deletion(-)\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sat, 21 Dec 2019 15:42:27 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_control_init() bug"
}
]
[
{
"msg_contents": "Hello PG hackers;\n\nI'm writing an extension running on background workers and found\nget_database_name() causes SEGV and found internally resource owner was set\nto NULL. Could anybody let me know how it happens and how I can use this\nfunction. Argument to get_database_name() looks correct.\n\nRegards;\n---\nKoichi Suzuki",
"msg_date": "Wed, 11 Dec 2019 15:38:03 +0900",
"msg_from": "Koichi Suzuki <koichi@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "get_database_name() from background worker"
},
{
"msg_contents": "From: Koichi Suzuki <koichi@2ndquadrant.com>\r\n> I'm writing an extension running on background workers and found\r\n> get_database_name() causes SEGV and found internally resource owner was\r\n> wet to NULL. Could anybody let me know how it happens and how I can use\r\n> this function. Argument to get_database_name() looks correct.\r\n\r\nDid you specify BGWORKER_BACKGROUND_DATABASE_CONNECTION when registering the background worker?\r\nDid you start transaction by calling StartTransactionCommand()?\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n",
"msg_date": "Wed, 11 Dec 2019 07:21:29 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: get_database_name() from background worker"
},
{
"msg_contents": "Hi\r\n\tI would like to know : Are you using pg_background extension to work with background workers ?\r\n\r\n\tThanks in advance\r\n\r\nBest Regards\r\n\r\nDidier ROS\r\nExpertise SGBD\r\nEDF - DTEO - DSIT - IT DMA\r\n\r\n\r\n-----Message d'origine-----\r\nDe : tsunakawa.takay@fujitsu.com [mailto:tsunakawa.takay@fujitsu.com] \r\nEnvoyé : mercredi 11 décembre 2019 08:21\r\nÀ : 'Koichi Suzuki' <koichi@2ndquadrant.com>\r\nCc : pgsql-hackers@lists.postgresql.org\r\nObjet : RE: get_database_name() from background worker\r\n\r\nFrom: Koichi Suzuki <koichi@2ndquadrant.com>\r\n> I'm writing an extension running on background workers and found\r\n> get_database_name() causes SEGV and found internally resource owner was\r\n> wet to NULL. Could anybody let me know how it happens and how I can use\r\n> this function. Argument to get_database_name() looks correct.\r\n\r\nDid you specify BGWORKER_BACKGROUND_DATABASE_CONNECTION when registering the background worker?\r\nDid you start transaction by calling StartTransactionCommand()?\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n\n\n\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\n\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. 
Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\n\nIl est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\n____________________________________________________\n\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\n\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\n\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.\n",
"msg_date": "Wed, 11 Dec 2019 07:26:29 +0000",
"msg_from": "ROS Didier <didier.ros@edf.fr>",
"msg_from_op": false,
"msg_subject": "RE: get_database_name() from background worker"
},
{
"msg_contents": "I'm not using this. Is it required in order to use get_database_name()?\n---\nKoichi Suzuki\n\n\n2019年12月11日(水) 16:26 ROS Didier <didier.ros@edf.fr>:\n\n> Hi\n> I would like to know : Are you using pg_background extension to\n> work with backgroud workers ?\n>\n> Thanks in advance\n>\n> Best Regards\n>\n> Didier ROS\n> Expertise SGBD\n> EDF - DTEO - DSIT - IT DMA\n>\n>\n> -----Message d'origine-----\n> De : tsunakawa.takay@fujitsu.com [mailto:tsunakawa.takay@fujitsu.com]\n> Envoyé : mercredi 11 décembre 2019 08:21\n> À : 'Koichi Suzuki' <koichi@2ndquadrant.com>\n> Cc : pgsql-hackers@lists.postgresql.org\n> Objet : RE: get_database_name() from background worker\n>\n> From: Koichi Suzuki <koichi@2ndquadrant.com>\n> > I'm writing an extension running on background workers and found\n> > get_database_name() causes SEGV and found internally resource owner was\n> > wet to NULL. Could anybody let me know how it happens and how I can use\n> > this function. Argument to get_database_name() looks correct.\n>\n> Did you specify BGWORKER_BACKGROUND_DATABASE_CONNECTION when registering\n> the background worker?\n> Did you start transaction by calling StartTransactionCommand()?\n>\n>\n> Regards\n> Takayuki Tsunakawa\n>\n>\n>\n>\n>\n> Ce message et toutes les pièces jointes (ci-après le 'Message') sont\n> établis à l'intention exclusive des destinataires et les informations qui y\n> figurent sont strictement confidentielles. Toute utilisation de ce Message\n> non conforme à sa destination, toute diffusion ou toute publication totale\n> ou partielle, est interdite sauf autorisation expresse.\n>\n> Si vous n'êtes pas le destinataire de ce Message, il vous est interdit de\n> le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou\n> partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de\n> votre système, ainsi que toutes ses copies, et de n'en garder aucune trace\n> sur quelque support que ce soit. 
Nous vous remercions également d'en\n> avertir immédiatement l'expéditeur par retour du message.\n>\n> Il est impossible de garantir que les communications par messagerie\n> électronique arrivent en temps utile, sont sécurisées ou dénuées de toute\n> erreur ou virus.\n> ____________________________________________________\n>\n> This message and any attachments (the 'Message') are intended solely for\n> the addressees. The information contained in this Message is confidential.\n> Any use of information contained in this Message not in accord with its\n> purpose, any dissemination or disclosure, either whole or partial, is\n> prohibited except formal approval.\n>\n> If you are not the addressee, you may not copy, forward, disclose or use\n> any part of it. If you have received this message in error, please delete\n> it and all copies from your system and notify the sender immediately by\n> return message.\n>\n> E-mail communication cannot be guaranteed to be timely secure, error or\n> virus-free.\n>\n",
"msg_date": "Wed, 11 Dec 2019 17:16:11 +0900",
"msg_from": "Koichi Suzuki <koichi@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: get_database_name() from background worker"
},
{
"msg_contents": "Not using this extension, sorry.\n---\nKoichi Suzuki\n\n\n2019年12月11日(水) 16:26 ROS Didier <didier.ros@edf.fr>:\n\n> Hi\n> I would like to know : Are you using pg_background extension to\n> work with backgroud workers ?\n>\n> Thanks in advance\n>\n> Best Regards\n>\n> Didier ROS\n> Expertise SGBD\n> EDF - DTEO - DSIT - IT DMA\n>\n>\n> -----Message d'origine-----\n> De : tsunakawa.takay@fujitsu.com [mailto:tsunakawa.takay@fujitsu.com]\n> Envoyé : mercredi 11 décembre 2019 08:21\n> À : 'Koichi Suzuki' <koichi@2ndquadrant.com>\n> Cc : pgsql-hackers@lists.postgresql.org\n> Objet : RE: get_database_name() from background worker\n>\n> From: Koichi Suzuki <koichi@2ndquadrant.com>\n> > I'm writing an extension running on background workers and found\n> > get_database_name() causes SEGV and found internally resource owner was\n> > wet to NULL. Could anybody let me know how it happens and how I can use\n> > this function. Argument to get_database_name() looks correct.\n>\n> Did you specify BGWORKER_BACKGROUND_DATABASE_CONNECTION when registering\n> the background worker?\n> Did you start transaction by calling StartTransactionCommand()?\n>\n>\n> Regards\n> Takayuki Tsunakawa\n>\n>\n>\n>\n>\n> Ce message et toutes les pièces jointes (ci-après le 'Message') sont\n> établis à l'intention exclusive des destinataires et les informations qui y\n> figurent sont strictement confidentielles. Toute utilisation de ce Message\n> non conforme à sa destination, toute diffusion ou toute publication totale\n> ou partielle, est interdite sauf autorisation expresse.\n>\n> Si vous n'êtes pas le destinataire de ce Message, il vous est interdit de\n> le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou\n> partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de\n> votre système, ainsi que toutes ses copies, et de n'en garder aucune trace\n> sur quelque support que ce soit. 
Nous vous remercions également d'en\n> avertir immédiatement l'expéditeur par retour du message.\n>\n> Il est impossible de garantir que les communications par messagerie\n> électronique arrivent en temps utile, sont sécurisées ou dénuées de toute\n> erreur ou virus.\n> ____________________________________________________\n>\n> This message and any attachments (the 'Message') are intended solely for\n> the addressees. The information contained in this Message is confidential.\n> Any use of information contained in this Message not in accord with its\n> purpose, any dissemination or disclosure, either whole or partial, is\n> prohibited except formal approval.\n>\n> If you are not the addressee, you may not copy, forward, disclose or use\n> any part of it. If you have received this message in error, please delete\n> it and all copies from your system and notify the sender immediately by\n> return message.\n>\n> E-mail communication cannot be guaranteed to be timely secure, error or\n> virus-free.\n>\n",
"msg_date": "Wed, 11 Dec 2019 17:17:00 +0900",
"msg_from": "Koichi Suzuki <koichi@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: get_database_name() from background worker"
},
{
"msg_contents": "So, what are you using to create background workers? Can you show me an extract of your code?\r\nTIA\r\nBest Regards\r\nDidier\r\n\r\n-----Original Message-----\r\nFrom: koichi@2ndquadrant.com [mailto:koichi@2ndquadrant.com]\r\nSent: Wednesday, December 11, 2019 09:16\r\nTo: ROS Didier <didier.ros@edf.fr>\r\nCc: tsunakawa.takay@fujitsu.com; pgsql-hackers@lists.postgresql.org\r\nSubject: Re: get_database_name() from background worker\r\n\r\nI'm not using this. Is this the must to use get_database_name()?\r\n---\r\nKoichi Suzuki\r\n\r\n\r\nOn Wed, Dec 11, 2019 at 16:26, ROS Didier <didier.ros@edf.fr> wrote:\r\nHi\r\n I would like to know: are you using the pg_background extension to work with background workers?\r\n\r\n Thanks in advance\r\n\r\nBest Regards\r\n\r\nDidier ROS\r\nExpertise SGBD\r\nEDF - DTEO - DSIT - IT DMA\r\n\r\n\r\n-----Original Message-----\r\nFrom: tsunakawa.takay@fujitsu.com [mailto:tsunakawa.takay@fujitsu.com]\r\nSent: Wednesday, December 11, 2019 08:21\r\nTo: 'Koichi Suzuki' <koichi@2ndquadrant.com>\r\nCc: pgsql-hackers@lists.postgresql.org\r\nSubject: RE: get_database_name() from background worker\r\n\r\nFrom: Koichi Suzuki <koichi@2ndquadrant.com>\r\n> I'm writing an extension running on background workers and found\r\n> get_database_name() causes SEGV and found internally the resource owner was\r\n> set to NULL. Could anybody let me know how it happens and how I can use\r\n> this function. Argument to get_database_name() looks correct.\r\n\r\nDid you specify BGWORKER_BACKEND_DATABASE_CONNECTION when registering the background worker?\r\nDid you start a transaction by calling StartTransactionCommand()?\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa",
"msg_date": "Wed, 11 Dec 2019 08:54:45 +0000",
"msg_from": "ROS Didier <didier.ros@edf.fr>",
"msg_from_op": false,
"msg_subject": "RE: get_database_name() from background worker"
},
{
"msg_contents": "On Wed, Dec 11, 2019 at 05:17:00PM +0900, Koichi Suzuki wrote:\n> Not using this extension, sorry.\n\nI have no idea what you are trying to do, but get_database_name()\naccesses the system cache, which means two things:\n- The access needs to be done in the context of a transaction. That's\na trick we use in a couple of places in core, see for example\nIdentifySystem() in walsender.c which looks for the database name. In\nthis case, you need to do the call in-between StartTransactionCommand\nand CommitTransactionCommand, and you should make sure that the memory\ncontext does not point to the one of the transaction to have an access\nto the syscache data after committing the inner transaction used for\nthe lookup.\n- Your background worker needs a database access, so bgw_flags needs\nto be BGWORKER_SHMEM_ACCESS | BGWORKER_BACKEND_DATABASE_CONNECTION,\nand also make sure to use BackgroundWorkerInitializeConnection at the\nbeginning of the main loop of your worker.\n\nHope that helps.\n--\nMichael",
"msg_date": "Wed, 11 Dec 2019 20:06:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: get_database_name() from background worker"
},
{
"msg_contents": "From: Koichi Suzuki <koichi@2ndquadrant.com>\r\n> I'm not using this. Is this the must to use get_database_name()?\r\n\r\nI don't think pg_background is a must, but the system catalog access by get_database_name() should require database connection and transaction. See src/test/modules/worker_spi/worker_spi.c for an example of background worker. That uses both of them.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n",
"msg_date": "Thu, 12 Dec 2019 01:04:28 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: get_database_name() from background worker"
},
{
"msg_contents": "On Wed, 11 Dec 2019 at 14:38, Koichi Suzuki <koichi@2ndquadrant.com> wrote:\n\n> Hello PG hackers;\n>\n> I'm writing an extension running on background workers and found\n> get_database_name() causes SEGV and found internally the resource owner was set\n> to NULL. Could anybody let me know how it happens and how I can use this\n> function. Argument to get_database_name() looks correct.\n>\n>\nI think the main question is answered; if the advice given does not help\nplease supply your code and a backtrace from the crash obtained from a core\nfile.\n\nHowever, this reminds me of something. I'd like to make our\nsyscache/relcache/catcache and all snapshot access functions\nAssert(IsTransactionState()); directly or at key locations. That'd make\nthese mistakes much more obvious - and as bgworkers become a more popular\nway to write code for PostgreSQL that's going to be important.\n\nSimilarly, it might make sense to assert that we have a valid snapshot in\nthe SPI, which we don't presently do for read-only SPI calls. I recall that\none biting me repeatedly when I was learning this stuff.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Thu, 12 Dec 2019 09:20:29 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: get_database_name() from background worker"
},
{
"msg_contents": "Hi\r\n With the pg_background extension, it is possible to make an \"autonomous transaction\", which means the possibility to commit inside a transaction.\r\n It is like a client which connects to a PostgreSQL instance, so you can execute any SQL orders.\r\n\r\nBest Regards\r\nDidier ROS\r\n-----Original Message-----\r\nFrom: tsunakawa.takay@fujitsu.com [mailto:tsunakawa.takay@fujitsu.com] \r\nSent: Thursday, December 12, 2019 02:04\r\nTo: 'Koichi Suzuki' <koichi@2ndquadrant.com>\r\nCc: ROS Didier <didier.ros@edf.fr>; pgsql-hackers@lists.postgresql.org\r\nSubject: RE: get_database_name() from background worker\r\n\r\nFrom: Koichi Suzuki <koichi@2ndquadrant.com>\r\n> I'm not using this. Is this the must to use get_database_name()?\r\n\r\nI don't think pg_background is a must, but the system catalog access by get_database_name() should require database connection and transaction. See src/test/modules/worker_spi/worker_spi.c for an example of background worker. That uses both of them.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa",
"msg_date": "Thu, 12 Dec 2019 08:21:03 +0000",
"msg_from": "ROS Didier <didier.ros@edf.fr>",
"msg_from_op": false,
"msg_subject": "RE: get_database_name() from background worker"
},
{
"msg_contents": "On Thu, 12 Dec 2019 at 16:21, ROS Didier <didier.ros@edf.fr> wrote:\n\n> Hi\n> With pg_background extension ,it is possible to make \"autonomous\n> transaction\" which means possibility to commit in a transaction.\n> It is like a client which connects to a postgresql instance. So you can\n> execute any sql orders .\n>\n>\nYes, that's possible. It's not easy though and I strongly suggest you look\ninto existing approaches like using dblink instead.\n\nPlease start a new thread rather than following an unrelated existing one.\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Fri, 13 Dec 2019 13:01:24 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: get_database_name() from background worker"
}
] |
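The thread above converges on two requirements for calling get_database_name() from a background worker: register the worker with database-connection capability, and wrap the syscache lookup in a transaction (Michael Paquier's message also notes copying the result out of the transaction's memory context). The following is a minimal sketch of a worker entry point pieced together from that advice and the worker_spi example the thread cites; `my_worker_main` and the hard-coded "postgres" database name are illustrative assumptions, and this is an extension fragment that must be built against the PostgreSQL server headers, not a standalone program.

```
#include "postgres.h"
#include "miscadmin.h"              /* MyDatabaseId */
#include "access/xact.h"            /* StartTransactionCommand() */
#include "commands/dbcommands.h"    /* get_database_name() */
#include "postmaster/bgworker.h"
#include "utils/memutils.h"         /* TopMemoryContext */

/*
 * Hypothetical worker entry point.  It must be registered elsewhere with
 *   worker.bgw_flags = BGWORKER_SHMEM_ACCESS |
 *                      BGWORKER_BACKEND_DATABASE_CONNECTION;
 * or BackgroundWorkerInitializeConnection() below will fail.
 */
void
my_worker_main(Datum main_arg)
{
    char       *dbname;

    BackgroundWorkerUnblockSignals();

    /* Attach to a database; without this, catalog access crashes. */
    BackgroundWorkerInitializeConnection("postgres", NULL, 0);

    /* Syscache lookups must happen inside a transaction. */
    StartTransactionCommand();
    dbname = get_database_name(MyDatabaseId);

    /*
     * get_database_name() pallocs its result in the current (transaction)
     * context, so copy it out before committing or the pointer dangles.
     */
    dbname = MemoryContextStrdup(TopMemoryContext, dbname);
    CommitTransactionCommand();

    elog(LOG, "background worker attached to database \"%s\"", dbname);
}
```

This mirrors the checklist in the replies: the SEGV reported at the top of the thread is what happens when either the connection step or the surrounding transaction is missing.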
[
{
"msg_contents": "From: Stephen Frost\nSent: Tuesday, December 10, 2019 17:52\n\n>There's multiple ways to get there though and I think what you're seeing\n>is that the \"just change it to something else\" answer isn't necessairly\n>going to be viewed as an improvement (or, at least, not enough of an\n>improvement to accept the cost of the change).\nWell, I was trying to apply another non-implied rule, \"break nothing\".\n\n>Why not change the variables? Changes that also improve the code itself\n>along with eliminating the shadowing of the global variable are going to\n>be a lot easier to be accepted.\nContrary to what I was initially thinking, it seems to me that changing the names of global variables is more acceptable to the people of the project.\n\n>Sure, but have you looked at how it's used? Instead of just renaming\n>the numTables variables in the functions that accept it- could those\n>variables just be removed instead of changing their name to make it look\n>like they're something different when they aren't actually different?\nNo. I didn't look.\n\n>I've only spent a bit of time looking at it, but it sure looks like the\n>variables could just be removed, and doing so doesn't break the\n>regression tests, which supports the idea that maybe there's a better\n>way to deal with those particular variables rather than renaming them.\n>Another approach to consider might be to move some global variables into\n>structures that are then global with better names to indicate that's\n>what they are.\nWhat you are asking does not seem reasonable to me, because, as I was told here (and I agree in part), I do not have the necessary knowledge of the structures and logic to propose big changes.\nFor the work I set out to do, finding bugs and making minor performance improvements, I believe I can contribute safely and without ruining anything.\nBy just changing the names of variables to something consistent and readable, the goal will be achieved without breaking anything.\nThe people best placed to make these changes are the authors and reviewers. Once we no longer have the problem of shadow variables, we can turn on the alert without breaking automatic compilation, as Tom Lane is concerned.\nThat's why I'd like to fix all collisions of variables, even the dumbest.\n\n>In short, a hack-and-slash patch that doesn't really spend much time\n>considering the changes beyond \"let's just change these to be different\n>to avoid shadowing globals\" isn't really a good way to go about\n>addressing these cases and has a good chance of making things more\n>confusing, not less.\nThis is totally contrary to what I think about it.\n\nregards,\nRanier Vilela",
"msg_date": "Wed, 11 Dec 2019 11:15:29 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "Greetings,\n\nDidn't see this previously (it's our typical approach to 'reply-all' to\npeople), though I don't think it changes my feelings about the latest\nproposed patch.\n\n* Ranier Vilela (ranier_gyn@hotmail.com) wrote:\n> De: Stephen Frost\n> Enviadas: Terça-feira, 10 de Dezembro de 2019 17:52\n> \n> >There's multiple ways to get there though and I think what you're seeing\n> >is that the \"just change it to something else\" answer isn't necessairly\n> >going to be viewed as an improvement (or, at least, not enough of an\n> >improvement to accept the cost of the change).\n> Well, I was trying to apply another non-implied rule, \"break nothing\".\n\nI agree with not breaking things but that doesn't mean the only\nreasonable approach is to do the absolute minimum- you might not be\nbreaking something today, but it's going to confuse people later on down\nthe road and may lead to bugs being introduced due to that confusion, or\nat the very least will add to people's time to figure out what's really\ngoing on.\n\n> >Why not change the variables? Changes that also improve the code itself\n> >along with eliminating the shadowing of the global variable are going to\n> >be a lot easier to be accepted.\n> Contrary to what I was initially thinking, it seems to me that changing the names of global variables is more acceptable to the people of the project.\n\nI wasn't suggesting to change the names of the global variables in this\nspecific case, though I could see that being a better approach in some\ninstances- but it really depends. Each case needs to be reviewed and\nconsidered and the best approach taken.\n\n> >Sure, but have you looked at how it's used? Instead of just renaming\n> >the numTables variables in the functions that accept it- could those\n> >variables just be removed instead of changing their name to make it look\n> >like they're something different when they aren't actually different?\n> No. 
I didn't look.\n\nI think we need to be looking at the changes and considering them, and\nthe person proposing the changes should be doing that and not just\nexpecting everyone else to do so.\n\n> >I've only spent a bit of time looking at it, but it sure looks like the\n> >variables could just be removed, and doing so doesn't break the\n> >regression tests, which supports the idea that maybe there's a better\n> >way to deal with those particular variables rather than renaming them.\n> >Another approach to consider might be to move some global variables into\n> >structures that are then global with better names to indicate that's\n> >what they are.\n> It does not seem reasonable to me what you are asking.\n> Because as I was told here and I agree in part. I do not have the necessary knowledge of structures and logic to propose big changes.\n\nI'd suggest that we work through that then and get you up to speed on\nthe structures and logic- the pg_dump code is pretty ugly but the\nspecific usage of numTables isn't too bad. Each of these should be\nlooked at independently and thought about \"what's the right way to fix\nthis?\" The right way isn't necessairly to just rename the variables, as\nI was saying, and doing so may lead to more confusion, not less.\n\n> For the work I set out to, find bugs and make minor performance improvements, I believe, can contribute safely and without ruining anything.\n\nHaving shadowed globals, while kinda ugly, doesn't necessairly mean it's\na bug. I'm not sure what \"minor performance improvements\" are being\nclaimed here but there's a whole lot of work involved in demonstrating\nthat a change is a performance improvement.\n\n> By just changing the names of variables to something consistent and readable, the goal will be done without break anything.\n\nbut.. 
the changes you're proposing are making them inconsistent and\nconfusing when there isn't actually a difference between the global and\nthe local, it's just the somewhere along the way someone thought they\nneeded to pass in numTables when they really didn't, and we should go\nfix *that*, not rename the variable to something else to make someone\nlater on go \"wait, why did we need to pass in this variable? how is\nthis different from the global?\"\n\n> Who is best to make these changes, are the authors and reviewers. Once we no longer have the problem of shadow variables, we can turn on the alert without breaking automatic compilation, as Tom Lane is concerned.\n\nPerhaps I'm a bit confused, but it seems that you're the author of these\nspecific changes, and I'm trying to provide feedback as to how you can\nimprove what you're proposing in a way that will improve the code base\noverall and reduce the confusion while also eliminating the shadow\nvariables. If the author of a patch isn't open to this kind of review\nand willing to adjust the patch to improve it because they aren't sure\nthat their changes will be correct then they could at least post them\nback here and ask, or better, go look at the code and get a better\nunderstanding of what's going on to build confidence in the change.\n\nThe goal here also shouldn't be \"we just want to turn on this alert, so\nwe're going to make changes to the source without thinking just to\nappease the compiler\".\n\n> That's why I'd like to fix all collisions of variables, even the dumbest.\n\nI agree with fixing collisions, but not in a rote way like this.\n\n> >In short, a hack-and-slash patch that doesn't really spend much time\n> >considering the changes beyond \"let's just change these to be different\n> >to avoid shadowing globals\" isn't really a good way to go about\n> >addressing these cases and has a good chance of making things more\n> >confusing, not less.\n> This is totally contrary to what I think about 
it.\n\n-1 from me then on this whole thread of changes.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 11 Dec 2019 10:34:38 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 16159\nLogged by: Pavel Suderevsky\nEmail address: psuderevsky@gmail.com\nPostgreSQL version: 11.6\nOperating system: CentOS 7.6.1810 (3.10.0-957.el7.x86_64)\nDescription: \n\nReproduced 11.2,11.6.\r\n\r\nIf PostgreSQL starts recovery and finds a history file for a timeline that\nis higher than current one, it will request file with the segment for the\nfuture timeline (that most likely doesn't exist yet) and only then it will\nrequest file with the segment for current timeline. \r\nIf archive is located on remote storage it can take huge time to find that\nsegments for the future timelines are not exist yet and therefore recovery\ncan take too long.\r\n\r\nExample:\r\n\r\nrecovery.conf:\r\n>restore_command = 'echo -e \"Searching WAL: %f, location: %p\";\n/usr/bin/pgbackrest --stanza=platform archive-get %f \"%p\"'\r\n>recovery_target_timeline = 'latest'\r\n>standby_mode = 'on'\r\n\r\nPostgres log during startup:\r\n>\r\n> 2019-12-06 07:11:16 CST LOG: database system was shut down in recovery\r\n> at 2019-12-06 07:11:08 CST\r\n> Searching WAL: 00000022.history, location: pg_wal/RECOVERYHISTORY\r\n> 2019-12-06 07:11:16 CST LOG: restored log file \"00000022.history\" from\r\n> archive\r\n> Searching WAL: 00000023.history, location: pg_wal/RECOVERYHISTORY\r\n> 2019-12-06 07:11:16 CST LOG: entering standby mode\r\n> Searching WAL: 00000022.history, location: pg_wal/RECOVERYHISTORY\r\n> 2019-12-06 07:11:16 CST LOG: restored log file \"00000022.history\" from\r\n> archive\r\n> Searching WAL: 00000022000018C60000003F, location: pg_wal/RECOVERYXLOG\r\n> Searching WAL: 00000021000018C60000003F, location: pg_wal/RECOVERYXLOG\r\n> 2019-12-06 07:11:20 CST LOG: restored log file\r\n> \"00000021000018C60000003F\" from archive\r\n> Searching WAL: 00000021.history, location: pg_wal/RECOVERYHISTORY\r\n> 2019-12-06 07:11:20 CST LOG: restored log file \"00000021.history\" from\r\n> archive\r\n> 
Searching WAL: 00000022000018BF0000001B, location: pg_wal/RECOVERYXLOG\r\n> Searching WAL: 00000021000018BF0000001B, location: pg_wal/RECOVERYXLOG\r\n> 2019-12-06 07:11:27 CST LOG: restored log file\r\n> \"00000021000018BF0000001B\" from archive\r\n> 2019-12-06 07:11:27 CST LOG: redo starts at 18BF/1B311260\r\n> Searching WAL: 00000022000018BF0000001C, location: pg_wal/RECOVERYXLOG\r\n> Searching WAL: 00000021000018BF0000001C, location: pg_wal/RECOVERYXLOG\r\n> 2019-12-06 07:11:34 CST LOG: restored log file\r\n> \"00000021000018BF0000001C\" from archive\r\n> Searching WAL: 00000022000018BF0000001D, location: pg_wal/RECOVERYXLOG\r\n> Searching WAL: 00000021000018BF0000001D, location: pg_wal/RECOVERYXLOG\r\n> 2019-12-06 07:11:40 CST LOG: restored log file\r\n> \"00000021000018BF0000001D\" from archive\r\n> Searching WAL: 00000022000018BF0000001E, location: pg_wal/RECOVERYXLOG\r\n> Searching WAL: 00000021000018BF0000001E, location: pg_wal/RECOVERYXLOG\r\n> 2019-12-06 07:11:46 CST LOG: restored log file\r\n> \"00000021000018BF0000001E\" from archive\r\n> Searching WAL: 00000022000018BF0000001F, location: pg_wal/RECOVERYXLOG\r\n> Searching WAL: 00000021000018BF0000001F, location: pg_wal/RECOVERYXLOG\r\n> 2019-12-06 07:11:53 CST LOG: restored log file\r\n> \"00000021000018BF0000001F\" from archive\r\n\r\nAs you can see Postgres tries to restore 00000022* WALs before timeline\nswitch LSN has been reached while restoring 00000021*.",
"msg_date": "Wed, 11 Dec 2019 12:39:20 +0000",
"msg_from": "PG Bug reporting form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "BUG #16159: recovery requests WALs for the next timelines before\n timeline switch LSN has been reached"
},
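An illustrative aside on the probe order seen in the startup log above: recovery tries each known timeline, highest TLI first, for the same segment number, which is why `00000022...` is requested before `00000021...`. This is a hedged Python sketch, not PostgreSQL source; the function names here are invented for illustration, but the 24-character file name layout (8 hex digits each for timeline, log and segment) follows PostgreSQL's documented WAL segment naming.

```python
def wal_segment_name(tli: int, log: int, seg: int) -> str:
    """Build the 24-character WAL segment file name: TLI, log, segment."""
    return f"{tli:08X}{log:08X}{seg:08X}"

def candidate_segments(timelines, log, seg):
    """Yield archive file names to probe, newest timeline first."""
    for tli in sorted(timelines, reverse=True):
        yield wal_segment_name(tli, log, seg)

# Reproduces the probe order reported in the startup log:
print(list(candidate_segments([0x21, 0x22], 0x18C6, 0x3F)))
# → ['00000022000018C60000003F', '00000021000018C60000003F']
```

Every name that turns out to be absent from the archive still costs one full restore_command round trip, which is where the slowdown on remote archive storage comes from.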
{
"msg_contents": "Hello.\n\nIn short, it is not a bug.\n\nAt Wed, 11 Dec 2019 12:39:20 +0000, PG Bug reporting form <noreply@postgresql.org> wrote in \n> The following bug has been logged on the website:\n> \n> Bug reference: 16159\n> Logged by: Pavel Suderevsky\n> Email address: psuderevsky@gmail.com\n> PostgreSQL version: 11.6\n> Operating system: CentOS 7.6.1810 (3.10.0-957.el7.x86_64)\n> Description: \n> \n> Reproduced 11.2,11.6.\n> \n> If PostgreSQL starts recovery and finds a history file for a timeline that\n> is higher than current one, it will request file with the segment for the\n> future timeline (that most likely doesn't exist yet) and only then it will\n> request file with the segment for current timeline.\n\nThe cause of the \"future\" timeline is that the standby has received\nthe history file for TLI=22 but has not completed replaying the first\ncheckpoint after promotion. In that case, WAL files before the\ntimeline switch should not exist for TLI=22, and PostgreSQL is making\nsure of that by peeking into the archive for the file.\n\nSince the standby always starts archive recovery from the REDO location of\nthe last checkpoint performed on the standby (or the restart point),\nthe amount of WAL to read is irrelevant to promotion.\n\n> If archive is located on remote storage it can take huge time to find that\n> segments for the future timelines are not exist yet and therefore recovery\n> can take too long.\n\nI don't think that peeking at non-existent remote files takes a comparable\namount of time to a 16MB transfer. If the problem is the amount of WAL\nfiles to transfer during recovery, I came up with three ways to make\nstandby startup faster.\n\n1. For operational shutdown/restarts, make sure that the latest\n   restart point is close enough to the replay location on the standby\n   before shutting down. If not, a manual checkpoint on the master and then\n   one on the standby would help. 
The functions pg_control_checkpoint()\n   and pg_last_wal_replay_lsn() would work for checking that condition.\n\n2. PostgreSQL 11 accepts \"always\" for the archive_mode GUC setting. It\n   enables standby-side archiving.\n\nhttps://www.postgresql.org/docs/11/runtime-config-wal.html#GUC-ARCHIVE-MODE\n\n3. Decrease max_wal_size or checkpoint_timeout on the master, and/or\n   decrease checkpoint_timeout on the standby. This decreases the\n   amount of time needed during recovery.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 12 Dec 2019 12:48:56 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16159: recovery requests WALs for the next timelines\n before timeline switch LSN has been reached"
},
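Point 1 in the reply above boils down to comparing two LSNs before shutting the standby down. Below is a hypothetical helper (not from this thread or from PostgreSQL itself) that parses the textual "XXX/XXXXXX" form returned by pg_control_checkpoint() and pg_last_wal_replay_lsn() and reports the replay gap in bytes; a small gap means little WAL to re-read at the next startup.

```python
def parse_lsn(text: str) -> int:
    """Convert an LSN such as '4AE1/A3000068' to a 64-bit byte position."""
    hi, lo = text.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def replay_gap_bytes(redo_lsn: str, replay_lsn: str) -> int:
    """Bytes of WAL between the last restart point's REDO LSN and the
    current replay position."""
    return parse_lsn(replay_lsn) - parse_lsn(redo_lsn)

# Example values, one 16MB segment plus 48 bytes apart:
print(replay_gap_bytes("4AE1/A3000068", "4AE1/A4000098"))
# → 16777264
```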
{
"msg_contents": "Hi,\n\nThank you very much for your explanation and sorry for the delay in\nanswering.\nBut for me it still seems that PostgreSQL has enough information to check\nthat no WALs exist for the new timeline, so as to omit searching all the\npossibly-existing WALs.\n\n> 0000005300004AE1000000A3\n> rmgr: Standby     len (rec/tot):     62/    62, tx:          0, lsn:\n> 4AE1/A3000028, prev 4AE1/A28EC5F8, desc: RUNNING_XACTS nextXid 82249762\n> latestCompletedXid 82249758 oldestRunningXid 82249759; 1 xacts: 82249759\n> rmgr: XLOG        len (rec/tot):    106/   106, tx:          0, lsn:\n> 4AE1/A3000068, prev 4AE1/A3000028, desc: CHECKPOINT_SHUTDOWN redo\n> 4AE1/A3000068; tli 83; prev tli 83; fpw true; xid 0:82249762; oid 1074976;\n> multi 144; offset 4568; oldest xid 562 in DB 1; oldest multi 1 in DB 1;\n> oldest/newest commit timestamp xid: 0/0; oldest running xid 0; shutdown\n> rmgr: XLOG        len (rec/tot):     24/    24, tx:          0, lsn:\n> 4AE1/A30000D8, prev 4AE1/A3000068, desc: SWITCH\n> 0000005400004AE1000000A4\n> rmgr: XLOG        len (rec/tot):    106/   106, tx:          0, lsn:\n> 4AE1/A4000028, prev 4AE1/A30000D8, desc: CHECKPOINT_SHUTDOWN redo\n> 4AE1/A4000028; tli 83; prev tli 83; fpw true; xid 0:82249762; oid 1074976;\n> multi 144; offset 4568; oldest xid 562 in DB 1; oldest multi 1 in DB 1;\n> oldest/newest commit timestamp xid: 0/0; oldest running xid 0; shutdown\n> rmgr: XLOG        len (rec/tot):     42/    42, tx:          0, lsn:\n> 4AE1/A4000098, prev 4AE1/A4000028, desc: END_OF_RECOVERY tli 84; prev tli\n> 83; time 2020-01-28 06:29:03.432938 CST\n> 00000054.history\n> 83 4AE1/A4000098 no recovery target specified\n>\nIt can just look through the first received new-timeline's WAL and ensure that the\ntimeline switch occurred in this WAL. 
Finally, it can check the archive for the\nonly previous WAL that could possibly exist.\n\nRegarding influence: the issue is not about a large amount of WALs to apply,\nbut about searching for the non-existing WALs on the remote storage; each such\nsearch can take 5-10 seconds while obtaining an existing WAL takes\nmilliseconds.\n\nRegards,\nPavel Suderevsky\n\nчт, 12 дек. 2019 г. в 06:49, Kyotaro Horiguchi <horikyota.ntt@gmail.com>:\n\n> Hello.\n>\n> In short, it is not a bug.\n>\n> At Wed, 11 Dec 2019 12:39:20 +0000, PG Bug reporting form <\n> noreply@postgresql.org> wrote in\n> > The following bug has been logged on the website:\n> >\n> > Bug reference: 16159\n> > Logged by: Pavel Suderevsky\n> > Email address: psuderevsky@gmail.com\n> > PostgreSQL version: 11.6\n> > Operating system: CentOS 7.6.1810 (3.10.0-957.el7.x86_64)\n> > Description:\n> >\n> > Reproduced 11.2,11.6.\n> >\n> > If PostgreSQL starts recovery and finds a history file for a timeline\n> that\n> > is higher than current one, it will request file with the segment for the\n> > future timeline (that most likely doesn't exist yet) and only then it\n> will\n> > request file with the segment for current timeline.\n>\n> The cause of the \"future\" timeline is that the standby has received\n> the history file for TLI=22 but has not completed replaying the first\n> checkpoint after promotion. 
In that case, WAL files before the\n> timeline switch should not exist for TLI=22 and PostgreSQL is making\n> sure that by peeking the archive for the file.\n>\n> Since standby always starts archive recovery from the REDO location of\n> the last checkpoint performed on the standby(or the restart point),\n> the WAL amount to read is irrelevant to promotion.\n>\n> > If archive is located on remote storage it can take huge time to find\n> that\n> > segments for the future timelines are not exist yet and therefore\n> recovery\n> > can take too long.\n>\n> I don't think that peeking non-existent remote files takes comparable\n> amount of time to 16MB transfer. If the problem is the amount of WAL\n> files to transfer during recovery, I came up of three ways to make\n> standby startup faster.\n>\n> 1. For operational shutdown/restarts, make sure that the latest\n>    restart point is close enough to the replay location on the standby\n>    before shutting down. If not, manual checkpoint on the master then\n>    that on the standby would help. The functions pg_control_checkpoint()\n>    and pg_last_wal_replay_lsn() would work for checking that condition.\n>\n> 2. PostgreSQL 11 accepts \"always\" for the archive_mode GUC setting. It\n>    enables standby-side archiving.\n>\n> https://www.postgresql.org/docs/11/runtime-config-wal.html#GUC-ARCHIVE-MODE\n>\n> 3. Decrease max_wal_size or checkopint_timeout on the master, and/or\n>    decrease checkpoint_timeout on the standby. This decreases the\n>    amount of time needed during recovery.\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>",
"msg_date": "Tue, 28 Jan 2020 19:13:32 +0300",
"msg_from": "Pavel Suderevsky <psuderevsky@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16159: recovery requests WALs for the next timelines before\n timeline switch LSN has been reached"
},
{
"msg_contents": "Hello, I added (moved to) -hackers.\n\nAt Tue, 28 Jan 2020 19:13:32 +0300, Pavel Suderevsky <psuderevsky@gmail.com> wrote in \n> But for me it still seems that PostgreSQL has enough information to check\n> that no WALs exist for the new timeline to omit searching all the\n> possibly-existing WALs.\n> \n> It can just look through the first received new-timeline's WAL and ensure\n> timeline switch occured in this WAL. Finally, it can check archive for the\n> only one possibly-existing previous WAL.\n\nRight. The timeline history file tells where a timeline ends.\n\n> Regading influence: issue is not about the large amount of WALs to apply\n> but in searching for the non-existing WALs on the remote storage, each such\n> search can take 5-10 seconds while obtaining existing WAL takes\n> milliseconds.\n\nWow. I didn't know of a file system that takes that many seconds when\ntrying non-existent files. Although I still think this is not a bug,\navoiding that actually leads to a big win on such systems.\n\nAfter some thought, I think it's safe and effectively doable to let\nXLogFileReadAnyTLI() refrain from trying WAL segments of too-high\nTLIs. Some garbage archive files out of the range of a timeline might\nbe seen, for example, after reusing an archive directory without clearing\nfiles. However, fetching such garbage just to fail doesn't\ncontribute to durability or reliability at all, I think.\n\nThe attached does that. \n\nAny thoughts?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 29 Jan 2020 12:02:22 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Don't try fetching future segment of a TLI."
},
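The idea of the attached patch can be sketched roughly as follows — a hedged Python rendering, not the actual C change to XLogFileReadAnyTLI(): consult the timeline history and skip any timeline that only began after the requested segment, since the archive cannot hold that segment under that TLI. The `(tli, begin_lsn)` list layout is an assumption made purely for illustration.

```python
WAL_SEG_SIZE = 16 * 1024 * 1024  # default 16MB WAL segment size

def useful_timelines(history, segno, seg_size=WAL_SEG_SIZE):
    """history: (tli, begin_lsn) pairs, newest timeline first, where
    begin_lsn is the LSN at which that timeline started (None when
    unknown). Yield only timelines that can contain segment segno."""
    for tli, begin_lsn in history:
        if begin_lsn is not None and begin_lsn // seg_size > segno:
            # The timeline began only after this segment ended, so no
            # such file can exist in the archive: don't probe for it.
            continue
        yield tli

# TLI 0x22 began at segment 100; a request for segment 95 skips it:
print([hex(t) for t in
       useful_timelines([(0x22, 100 * WAL_SEG_SIZE), (0x21, None)], 95)])
# → ['0x21']
```

This avoids exactly the doomed probes for `00000022...` segments shown in the original report's startup log.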
{
"msg_contents": "On 1/28/20 8:02 PM, Kyotaro Horiguchi wrote:\n > At Tue, 28 Jan 2020 19:13:32 +0300, Pavel Suderevsky\n >> Regading influence: issue is not about the large amount of WALs to apply\n >> but in searching for the non-existing WALs on the remote storage, \neach such\n >> search can take 5-10 seconds while obtaining existing WAL takes\n >> milliseconds.\n >\n > Wow. I didn't know of a file system that takes that much seconds to\n > trying non-existent files. Although I still think this is not a bug,\n > but avoiding that actually leads to a big win on such systems.\n\nI have not tested this case but I can imagine it would be slow in \npractice. It's axiomatic that it is hard to prove a negative. With \nmulti-region replication it might well take some time to be sure that \nthe file *really* doesn't exist and hasn't just been lost in a single \nregion.\n\n > After a thought, I think it's safe and effectively doable to let\n > XLogFileReadAnyTLI() refrain from trying WAL segments of too-high\n > TLIs. Some garbage archive files out of the range of a timeline might\n > be seen, for example, after reusing archive directory without clearing\n > files. However, fetching such garbages just to fail doesn't\n > contribute durability or reliablity at all, I think.\n\nThe patch seems sane; the trick will be testing it.\n\nPavel, do you have an environment where you can ensure this is a \nperformance benefit?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 31 Jan 2020 21:31:40 -0800",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Don't try fetching future segment of a TLI."
},
{
"msg_contents": "Hi,\n\nI've tested patch provided by Kyotaro and do confirm it fixes the issue.\nAny chance it will be merged to one of the next minor releases?\n\nThank you very much!\n\nсб, 1 февр. 2020 г. в 08:31, David Steele <david@pgmasters.net>:\n\n> On 1/28/20 8:02 PM, Kyotaro Horiguchi wrote:\n> > At Tue, 28 Jan 2020 19:13:32 +0300, Pavel Suderevsky\n> >> Regading influence: issue is not about the large amount of WALs to\n> apply\n> >> but in searching for the non-existing WALs on the remote storage,\n> each such\n> >> search can take 5-10 seconds while obtaining existing WAL takes\n> >> milliseconds.\n> >\n> > Wow. I didn't know of a file system that takes that much seconds to\n> > trying non-existent files. Although I still think this is not a bug,\n> > but avoiding that actually leads to a big win on such systems.\n>\n> I have not tested this case but I can imagine it would be slow in\n> practice. It's axiomatic that is hard to prove a negative. With\n> multi-region replication it might well take some time to be sure that\n> the file *really* doesn't exist and hasn't just been lost in a single\n> region.\n>\n> > After a thought, I think it's safe and effectively doable to let\n> > XLogFileReadAnyTLI() refrain from trying WAL segments of too-high\n> > TLIs. Some garbage archive files out of the range of a timeline might\n> > be seen, for example, after reusing archive directory without clearing\n> > files. However, fetching such garbages just to fail doesn't\n> > contribute durability or reliablity at all, I think.\n>\n> The patch seems sane, the trick will be testing it.\n>\n> Pavel, do you have an environment where you can ensure this is a\n> performance benefit?\n>\n> Regards,\n> --\n> -David\n> david@pgmasters.net\n>",
"msg_date": "Thu, 19 Mar 2020 16:22:16 +0300",
"msg_from": "Pavel Suderevsky <psuderevsky@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Don't try fetching future segment of a TLI."
},
{
"msg_contents": "On 2020/03/19 22:22, Pavel Suderevsky wrote:\n> Hi,\n> \n> I've tested patch provided by Kyotaro and do confirm it fixes the issue.\n\nThe patch looks good to me. Attached is the updated version of the patch.\nI updated only comments.\n\nBarring any objection, I will commit this patch.\n\n> Any chance it will be merged to one of the next minor releases?\n\nThis doesn't seem a bug, so I'm thinking to merge this to next *major*\nversion release, i.e., v13.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Tue, 7 Apr 2020 02:43:02 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Don't try fetching future segment of a TLI."
},
{
"msg_contents": "On 4/6/20 1:43 PM, Fujii Masao wrote:\n> \n> \n> On 2020/03/19 22:22, Pavel Suderevsky wrote:\n>> Hi,\n>>\n>> I've tested patch provided by Kyotaro and do confirm it fixes the issue.\n> \n> The patch looks good to me. Attached is the updated version of the patch.\n> I updated only comments.\n> \n> Barring any objection, I will commit this patch.\n\nThe patch looks good to me.\n\n>> Any chance it will be merged to one of the next minor releases?\n> \n> This doesn't seem a bug, so I'm thinking to merge this to next *major*\n> version release, i.e., v13.\n\nNot a bug, perhaps, but I think we do consider back-patching performance \nproblems. The rise in S3 usage has just exposed how poorly this \nperformed code in high-latency environments.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Mon, 6 Apr 2020 15:04:07 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Don't try fetching future segment of a TLI."
},
{
"msg_contents": "Thank you for picking this up.\n\nAt Tue, 7 Apr 2020 02:43:02 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> On 2020/03/19 22:22, Pavel Suderevsky wrote:\n> > Hi,\n> > I've tested patch provided by Kyotaro and do confirm it fixes the\n> > issue.\n> \n> The patch looks good to me. Attached is the updated version of the\n> patch.\n> I updated only comments.\n\n+\t\t\t * The logfile segment that doesn't belong to the timeline is\n+\t\t\t * older or newer than the segment that the timeline started or\n+\t\t\t * eneded at, respectively. It's sufficient to check only the\n\ns/eneded/ended/ ?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 07 Apr 2020 10:29:09 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Don't try fetching future segment of a TLI."
},
{
"msg_contents": "\n\nOn 2020/04/07 4:04, David Steele wrote:\n> On 4/6/20 1:43 PM, Fujii Masao wrote:\n>>\n>>\n>> On 2020/03/19 22:22, Pavel Suderevsky wrote:\n>>> Hi,\n>>>\n>>> I've tested patch provided by Kyotaro and do confirm it fixes the issue.\n>>\n>> The patch looks good to me. Attached is the updated version of the patch.\n>> I updated only comments.\n>>\n>> Barring any objection, I will commit this patch.\n> \n> The patch looks good to me.\n> \n>>> Any chance it will be merged to one of the next minor releases?\n>>\n>> This doesn't seem a bug, so I'm thinking to merge this to next *major*\n>> version release, i.e., v13.\n> \n> Not a bug, perhaps, but I think we do consider back-patching performance problems. The rise in S3 usage has just exposed how poorly this performed code in high-latency environments.\n\nI understood the situation and am fine to back-patch that. But I'm not sure\nif it's fair to do that. Maybe we need to hear more opinions about this?\nOTOH, feature freeze for v13 is today, so what about committing the patch\nin v13 at first, and then doing the back-patch after hearing opinions and\nreceiving many +1?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 7 Apr 2020 12:15:00 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Don't try fetching future segment of a TLI."
},
{
"msg_contents": "On 2020/04/07 10:29, Kyotaro Horiguchi wrote:\n> Thank you for picking this up.\n> \n> At Tue, 7 Apr 2020 02:43:02 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>> On 2020/03/19 22:22, Pavel Suderevsky wrote:\n>>> Hi,\n>>> I've tested patch provided by Kyotaro and do confirm it fixes the\n>>> issue.\n>>\n>> The patch looks good to me. Attached is the updated version of the\n>> patch.\n>> I updated only comments.\n> \n> +\t\t\t * The logfile segment that doesn't belong to the timeline is\n> +\t\t\t * older or newer than the segment that the timeline started or\n> +\t\t\t * eneded at, respectively. It's sufficient to check only the\n> \n> s/eneded/ended/ ?\n\nYes! Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Tue, 7 Apr 2020 12:22:20 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Don't try fetching future segment of a TLI."
},
{
"msg_contents": "At Tue, 7 Apr 2020 12:15:00 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2020/04/07 4:04, David Steele wrote:\n> > On 4/6/20 1:43 PM, Fujii Masao wrote:\n> >>\n> >>\n> >> On 2020/03/19 22:22, Pavel Suderevsky wrote:\n> >>> Hi,\n> >>>\n> >>> I've tested patch provided by Kyotaro and do confirm it fixes the\n> >>> issue.\n> >>\n> >> The patch looks good to me. Attached is the updated version of the\n> >> patch.\n> >> I updated only comments.\n> >>\n> >> Barring any objection, I will commit this patch.\n> > The patch looks good to me.\n> > \n> >>> Any chance it will be merged to one of the next minor releases?\n> >>\n> >> This doesn't seem a bug, so I'm thinking to merge this to next *major*\n> >> version release, i.e., v13.\n> > Not a bug, perhaps, but I think we do consider back-patching\n> > performance problems. The rise in S3 usage has just exposed how poorly\n> > this performed code in high-latency environments.\n> \n> I understood the situation and am fine to back-patch that. But I'm not\n> sure\n> if it's fair to do that. Maybe we need to hear more opinions about\n> this?\n> OTOH, feature freeze for v13 is today, so what about committing the\n> patch\n> in v13 at first, and then doing the back-patch after hearing opinions\n> and\n> receiving many +1?\n\n+1 for commit only v13 today, then back-patch if people wants and/or\naccepts.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 07 Apr 2020 16:48:01 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Don't try fetching future segment of a TLI."
},
{
"msg_contents": "On Tue, Apr 07, 2020 at 12:15:00PM +0900, Fujii Masao wrote:\n> I understood the situation and am fine to back-patch that. But I'm not sure\n> if it's fair to do that. Maybe we need to hear more opinions about this?\n> OTOH, feature freeze for v13 is today, so what about committing the patch\n> in v13 at first, and then doing the back-patch after hearing opinions and\n> receiving many +1?\n\nI have not looked at the patch so I cannot say much about it, but it\nis annoying to fetch segments you are not going to need anyway if you\ntarget recovery with a timeline older than the segments fetched and\nthis has a cost when you pay for the bandwidth of your environment\nwith only one archive location. So a backpatch sounds like a good\nthing to do even if recovery is not broken per-se, only slower.\n\nDesigning a TAP test for that is tricky, but you could look at the\nlogs of the backend to make sure that only the wanted segments are\nfetched with a central archived solution and multiple timelines\ninvolved. And costly it is.\n--\nMichael",
"msg_date": "Tue, 7 Apr 2020 17:17:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Don't try fetching future segment of a TLI."
},
{
"msg_contents": "I`ve bumped into this issue recently:\nhttps://www.postgresql.org/message-id/dd6690b0-ec03-6b3c-6fac-c963f91f87a7%40postgrespro.ru\n\n\nOn 4/6/20 8:43 PM, Fujii Masao wrote:\n\n> The patch looks good to me. Attached is the updated version of the patch.\n> I updated only comments.\n>\n> Barring any objection, I will commit this patch. \n\nI`ve been running tests on your patch. So far so good.\n\nOn Tue, Apr 07, 2020 at 12:15:00PM +0900, Fujii Masao wrote:\n\n> I understood the situation and am fine to back-patch that. But I'm not sure\n> if it's fair to do that. Maybe we need to hear more opinions about this?\n> OTOH, feature freeze for v13 is today, so what about committing the patch\n> in v13 at first, and then doing the back-patch after hearing opinions and\n> receiving many +1?\n\n+1 to back-patching it.\n\n-- \nGrigory Smolkin\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 7 Apr 2020 12:36:18 +0300",
"msg_from": "Grigory Smolkin <g.smolkin@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Don't try fetching future segment of a TLI."
},
{
"msg_contents": "\nOn 4/7/20 3:48 AM, Kyotaro Horiguchi wrote:\n> At Tue, 7 Apr 2020 12:15:00 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>>> This doesn't seem a bug, so I'm thinking to merge this to next *major*\n>>>> version release, i.e., v13.\n>>> Not a bug, perhaps, but I think we do consider back-patching\n>>> performance problems. The rise in S3 usage has just exposed how poorly\n>>> this performed code in high-latency environments.\n>>\n>> I understood the situation and am fine to back-patch that. But I'm not\n>> sure\n>> if it's fair to do that. Maybe we need to hear more opinions about\n>> this?\n>> OTOH, feature freeze for v13 is today, so what about committing the\n>> patch\n>> in v13 at first, and then doing the back-patch after hearing opinions\n>> and\n>> receiving many +1?\n> \n> +1 for commit only v13 today, then back-patch if people wants and/or\n> accepts.\n\nDefinitely +1 for a commit today to v13. I certainly was not trying to \nhold that up.\n\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Tue, 7 Apr 2020 07:21:58 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Don't try fetching future segment of a TLI."
},
{
"msg_contents": "\n\nOn 2020/04/07 20:21, David Steele wrote:\n> \n> On 4/7/20 3:48 AM, Kyotaro Horiguchi wrote:\n>> At Tue, 7 Apr 2020 12:15:00 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>>>> This doesn't seem a bug, so I'm thinking to merge this to next *major*\n>>>>> version release, i.e., v13.\n>>>> Not a bug, perhaps, but I think we do consider back-patching\n>>>> performance problems. The rise in S3 usage has just exposed how poorly\n>>>> this performed code in high-latency environments.\n>>>\n>>> I understood the situation and am fine to back-patch that. But I'm not\n>>> sure\n>>> if it's fair to do that. Maybe we need to hear more opinions about\n>>> this?\n>>> OTOH, feature freeze for v13 is today, so what about committing the\n>>> patch\n>>> in v13 at first, and then doing the back-patch after hearing opinions\n>>> and\n>>> receiving many +1?\n>>\n>> +1 for commit only v13 today, then back-patch if people wants and/or\n>> accepts.\n> \n> Definitely +1 for a commit today to v13. I certainly was not trying to hold that up\nPushed the patch to v13, at first!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 8 Apr 2020 01:49:27 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Don't try fetching future segment of a TLI."
},
{
"msg_contents": "\n\nOn 2020/04/08 1:49, Fujii Masao wrote:\n> \n> \n> On 2020/04/07 20:21, David Steele wrote:\n>>\n>> On 4/7/20 3:48 AM, Kyotaro Horiguchi wrote:\n>>> At Tue, 7 Apr 2020 12:15:00 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>>>>> This doesn't seem a bug, so I'm thinking to merge this to next *major*\n>>>>>> version release, i.e., v13.\n>>>>> Not a bug, perhaps, but I think we do consider back-patching\n>>>>> performance problems. The rise in S3 usage has just exposed how poorly\n>>>>> this performed code in high-latency environments.\n>>>>\n>>>> I understood the situation and am fine to back-patch that. But I'm not\n>>>> sure\n>>>> if it's fair to do that. Maybe we need to hear more opinions about\n>>>> this?\n>>>> OTOH, feature freeze for v13 is today, so what about committing the\n>>>> patch\n>>>> in v13 at first, and then doing the back-patch after hearing opinions\n>>>> and\n>>>> receiving many +1?\n>>>\n>>> +1 for commit only v13 today, then back-patch if people wants and/or\n>>> accepts.\n\nPlease let me revisit this. Currently Grigory Smolkin, David Steele,\nMichael Paquier and Pavel Suderevsky agree to the back-patch and\nthere has been no objection to that. So we should do the back-patch?\nOr does anyone object to that?\n\nI don't think that this is a feature bug because archive recovery works\nfine from a functional perspective without this commit. OTOH,\nI understand that, without the commit, there is complaint about that\narchive recovery may be slow unnecessarily when archival storage is\nlocated in remote, e.g., Amazon S3 and it takes a long time to fetch\nthe non-existent archive WAL file. So I'm ok to the back-patch unless\nthere is no strong objection to that.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 30 Apr 2020 23:15:51 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Back-patch is necessary? Re: Don't try fetching future segment of a\n TLI."
},
{
"msg_contents": "On Thu, Apr 30, 2020 at 7:46 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2020/04/08 1:49, Fujii Masao wrote:\n> >\n> >\n> > On 2020/04/07 20:21, David Steele wrote:\n> >>\n> >> On 4/7/20 3:48 AM, Kyotaro Horiguchi wrote:\n> >>> At Tue, 7 Apr 2020 12:15:00 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n> >>>>>> This doesn't seem a bug, so I'm thinking to merge this to next *major*\n> >>>>>> version release, i.e., v13.\n> >>>>> Not a bug, perhaps, but I think we do consider back-patching\n> >>>>> performance problems. The rise in S3 usage has just exposed how poorly\n> >>>>> this performed code in high-latency environments.\n> >>>>\n> >>>> I understood the situation and am fine to back-patch that. But I'm not\n> >>>> sure\n> >>>> if it's fair to do that. Maybe we need to hear more opinions about\n> >>>> this?\n> >>>> OTOH, feature freeze for v13 is today, so what about committing the\n> >>>> patch\n> >>>> in v13 at first, and then doing the back-patch after hearing opinions\n> >>>> and\n> >>>> receiving many +1?\n> >>>\n> >>> +1 for commit only v13 today, then back-patch if people wants and/or\n> >>> accepts.\n>\n> Please let me revisit this. Currently Grigory Smolkin, David Steele,\n> Michael Paquier and Pavel Suderevsky agree to the back-patch and\n> there has been no objection to that. So we should do the back-patch?\n> Or does anyone object to that?\n>\n> I don't think that this is a feature bug because archive recovery works\n> fine from a functional perspective without this commit. OTOH,\n> I understand that, without the commit, there is complaint about that\n> archive recovery may be slow unnecessarily when archival storage is\n> located in remote, e.g., Amazon S3 and it takes a long time to fetch\n> the non-existent archive WAL file. So I'm ok to the back-patch unless\n> there is no strong objection to that.\n>\n\nI don't see any obvious problem with the changed code but we normally\ndon't backpatch performance improvements. I can see that the code\nchange here appears to be straight forward so it might be fine to\nbackpatch this. Have we seen similar reports earlier as well? AFAIK,\nthis functionality is for a long time and if people were facing this\non a regular basis then we would have seen such reports multiple\ntimes. I mean to say if the chances of this hitting are less then we\ncan even choose not to backpatch this.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 2 May 2020 17:10:38 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Back-patch is necessary? Re: Don't try fetching future segment of\n a TLI."
},
{
"msg_contents": "\n\nOn 2020/05/02 20:40, Amit Kapila wrote:\n> On Thu, Apr 30, 2020 at 7:46 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> On 2020/04/08 1:49, Fujii Masao wrote:\n>>>\n>>>\n>>> On 2020/04/07 20:21, David Steele wrote:\n>>>>\n>>>> On 4/7/20 3:48 AM, Kyotaro Horiguchi wrote:\n>>>>> At Tue, 7 Apr 2020 12:15:00 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>>>>>>> This doesn't seem a bug, so I'm thinking to merge this to next *major*\n>>>>>>>> version release, i.e., v13.\n>>>>>>> Not a bug, perhaps, but I think we do consider back-patching\n>>>>>>> performance problems. The rise in S3 usage has just exposed how poorly\n>>>>>>> this performed code in high-latency environments.\n>>>>>>\n>>>>>> I understood the situation and am fine to back-patch that. But I'm not\n>>>>>> sure\n>>>>>> if it's fair to do that. Maybe we need to hear more opinions about\n>>>>>> this?\n>>>>>> OTOH, feature freeze for v13 is today, so what about committing the\n>>>>>> patch\n>>>>>> in v13 at first, and then doing the back-patch after hearing opinions\n>>>>>> and\n>>>>>> receiving many +1?\n>>>>>\n>>>>> +1 for commit only v13 today, then back-patch if people wants and/or\n>>>>> accepts.\n>>\n>> Please let me revisit this. Currently Grigory Smolkin, David Steele,\n>> Michael Paquier and Pavel Suderevsky agree to the back-patch and\n>> there has been no objection to that. So we should do the back-patch?\n>> Or does anyone object to that?\n>>\n>> I don't think that this is a feature bug because archive recovery works\n>> fine from a functional perspective without this commit. OTOH,\n>> I understand that, without the commit, there is complaint about that\n>> archive recovery may be slow unnecessarily when archival storage is\n>> located in remote, e.g., Amazon S3 and it takes a long time to fetch\n>> the non-existent archive WAL file. So I'm ok to the back-patch unless\n>> there is no strong objection to that.\n>>\n> \n> I don't see any obvious problem with the changed code but we normally\n> don't backpatch performance improvements. I can see that the code\n> change here appears to be straight forward so it might be fine to\n> backpatch this. Have we seen similar reports earlier as well? AFAIK,\n> this functionality is for a long time and if people were facing this\n> on a regular basis then we would have seen such reports multiple\n> times. I mean to say if the chances of this hitting are less then we\n> can even choose not to backpatch this.\n\nI found the following two reports. ISTM there are not so many reports...\nhttps://www.postgresql.org/message-id/16159-f5a34a3a04dc67e0@postgresql.org\nhttps://www.postgresql.org/message-id/dd6690b0-ec03-6b3c-6fac-c963f91f87a7%40postgrespro.ru\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 7 May 2020 15:43:40 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Back-patch is necessary? Re: Don't try fetching future segment of\n a TLI."
},
{
"msg_contents": "On Thu, May 7, 2020 at 12:13 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2020/05/02 20:40, Amit Kapila wrote:\n> >\n> > I don't see any obvious problem with the changed code but we normally\n> > don't backpatch performance improvements. I can see that the code\n> > change here appears to be straight forward so it might be fine to\n> > backpatch this. Have we seen similar reports earlier as well? AFAIK,\n> > this functionality is for a long time and if people were facing this\n> > on a regular basis then we would have seen such reports multiple\n> > times. I mean to say if the chances of this hitting are less then we\n> > can even choose not to backpatch this.\n>\n> I found the following two reports. ISTM there are not so many reports...\n> https://www.postgresql.org/message-id/16159-f5a34a3a04dc67e0@postgresql.org\n> https://www.postgresql.org/message-id/dd6690b0-ec03-6b3c-6fac-c963f91f87a7%40postgrespro.ru\n>\n\nThe first seems to be the same where this bug has been fixed. It has\nbeen moved to hackers in email [1]. Am, I missing something?\nConsidering it has been encountered by two different people, I think\nit would not be a bad idea to back-patch this.\n\n[1] - https://www.postgresql.org/message-id/20200129.120222.1476610231001551715.horikyota.ntt%40gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 7 May 2020 14:27:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Back-patch is necessary? Re: Don't try fetching future segment of\n a TLI."
},
{
"msg_contents": "\n\nOn 2020/05/07 17:57, Amit Kapila wrote:\n> On Thu, May 7, 2020 at 12:13 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> On 2020/05/02 20:40, Amit Kapila wrote:\n>>>\n>>> I don't see any obvious problem with the changed code but we normally\n>>> don't backpatch performance improvements. I can see that the code\n>>> change here appears to be straight forward so it might be fine to\n>>> backpatch this. Have we seen similar reports earlier as well? AFAIK,\n>>> this functionality is for a long time and if people were facing this\n>>> on a regular basis then we would have seen such reports multiple\n>>> times. I mean to say if the chances of this hitting are less then we\n>>> can even choose not to backpatch this.\n>>\n>> I found the following two reports. ISTM there are not so many reports...\n>> https://www.postgresql.org/message-id/16159-f5a34a3a04dc67e0@postgresql.org\n>> https://www.postgresql.org/message-id/dd6690b0-ec03-6b3c-6fac-c963f91f87a7%40postgrespro.ru\n>>\n> \n> The first seems to be the same where this bug has been fixed. It has\n> been moved to hackers in email [1].\n\nYes, that's the original report that leaded to the commit.\n\n> Am, I missing something?\n> Considering it has been encountered by two different people, I think\n> it would not be a bad idea to back-patch this.\n\n+1 So I will do the back-patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 8 May 2020 14:23:32 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Back-patch is necessary? Re: Don't try fetching future segment of\n a TLI."
},
{
"msg_contents": "\n\nOn 2020/05/08 14:23, Fujii Masao wrote:\n> \n> \n> On 2020/05/07 17:57, Amit Kapila wrote:\n>> On Thu, May 7, 2020 at 12:13 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>\n>>> On 2020/05/02 20:40, Amit Kapila wrote:\n>>>>\n>>>> I don't see any obvious problem with the changed code but we normally\n>>>> don't backpatch performance improvements. I can see that the code\n>>>> change here appears to be straight forward so it might be fine to\n>>>> backpatch this. Have we seen similar reports earlier as well? AFAIK,\n>>>> this functionality is for a long time and if people were facing this\n>>>> on a regular basis then we would have seen such reports multiple\n>>>> times. I mean to say if the chances of this hitting are less then we\n>>>> can even choose not to backpatch this.\n>>>\n>>> I found the following two reports. ISTM there are not so many reports...\n>>> https://www.postgresql.org/message-id/16159-f5a34a3a04dc67e0@postgresql.org\n>>> https://www.postgresql.org/message-id/dd6690b0-ec03-6b3c-6fac-c963f91f87a7%40postgrespro.ru\n>>>\n>>\n>> The first seems to be the same where this bug has been fixed. It has\n>> been moved to hackers in email [1].\n> \n> Yes, that's the original report that leaded to the commit.\n> \n>> Am, I missing something?\n>> Considering it has been encountered by two different people, I think\n>> it would not be a bad idea to back-patch this.\n> \n> +1 So I will do the back-patch.\n\nDone. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Sat, 9 May 2020 12:26:22 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Back-patch is necessary? Re: Don't try fetching future segment of\n a TLI."
}
] |
[
{
"msg_contents": "During NestLoop execution we have bad corner case: if outer subtree \ncontains tuples the join node will scan inner subtree even if it does \nnot return any tuples.\n\nTo reproduce the problem see 'problem.sql' in attachment:\nOut of explain analyze see in 'problem_explain.txt'\n\nAs you can see, executor scan each of 1e5 outer tuples despite the fact \nthat inner can't return any tuples.\n\nTeodor Sigaev and I developed a patch to solve this problem. Result of \nexplain analyze procedure can be found in the 'optimized_execution.txt'.\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 11 Dec 2019 18:08:40 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Optimization of NestLoop join in the case of guaranteed empty inner\n subtree"
},
{
"msg_contents": "Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n> During NestLoop execution we have bad corner case: if outer subtree \n> contains tuples the join node will scan inner subtree even if it does \n> not return any tuples.\n\nSo the first question about corner-case optimizations like this is always\n\"how much overhead does it add in the normal case where it fails to gain\nanything?\". I see no performance numbers in your proposal.\n\nI do not much like anything about the code, either: as written it's\nonly helpful for an especially narrow corner case (so narrow that\nI wonder if it really ever helps at all: surely calling a nodeMaterial\nwhose tuplestore is empty doesn't cost much). But that doesn't stop it\nfrom adding a bool to the generic PlanState struct, with global\nimplications. What I'd expected from your text description is that\nnodeNestLoop would remember whether its inner child had returned zero rows\nthe first time, and assume that subsequent executions could be skipped\nunless the inner child's parameters change.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 11 Dec 2019 10:49:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of NestLoop join in the case of guaranteed empty\n inner subtree"
},
{
"msg_contents": "On 12/11/19 8:49 PM, Tom Lane wrote:\n> Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n>> During NestLoop execution we have bad corner case: if outer subtree\n>> contains tuples the join node will scan inner subtree even if it does\n>> not return any tuples.\n> \n> So the first question about corner-case optimizations like this is always\n> \"how much overhead does it add in the normal case where it fails to gain\n> anything?\". I see no performance numbers in your proposal.\n\nI thought it is trivial. But quick study shows no differences that can \nbe seen.\n\n> \n> I do not much like anything about the code, either: as written it's\n> only helpful for an especially narrow corner case (so narrow that\n> I wonder if it really ever helps at all: surely calling a nodeMaterial\n> whose tuplestore is empty doesn't cost much).\n\nScanning of large outer can be very costly. If you will try to play with \nanalytical queries you can find cases, where nested loops uses \nmaterialization of zero tuples. At least one of the cases for this is \nfinding data gaps.\nAlso, this optimization exists in logic of hash join.\n\n> But that doesn't stop it\n> from adding a bool to the generic PlanState struct, with global\n> implications. What I'd expected from your text description is that\n> nodeNestLoop would remember whether its inner child had returned zero rows\n> the first time, and assume that subsequent executions could be skipped\n> unless the inner child's parameters change.\n\nThis note I was waiting for. I agree with you that adding a bool \nvariable to PlanState is excessful. See in attachment another version of \nthe optimization.\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sun, 15 Dec 2019 15:56:44 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Optimization of NestLoop join in the case of guaranteed empty\n inner subtree"
}
] |
[
{
"msg_contents": "There were a couple of recent threads that wanted to add an \n.editorconfig file but never actually ended up doing so.[0][1] Here is \na patch. It is meant to match more or less what's in .dir-locals.el.\n\nI have only tested this with the GitHub view, not with an actual editor.\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/20180605172252.GA2433%40fetter.org\n[1]: \nhttps://www.postgresql.org/message-id/flat/20180529153617.c72llpzbvt46chr6%40alap3.anarazel.de\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 11 Dec 2019 17:00:59 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Add .editorconfig"
},
{
"msg_contents": "> On 11 Dec 2019, at 17:00, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> There were a couple of recent threads that wanted to add an .editorconfig file but never actually ended up doing so.[0][1] Here is a patch. It is meant to match more or less what's in .dir-locals.el.\n\n+[*.{c,h,l,y,pl,pm}]\n\nWhat about *.sh?\n\nNot that there is much of Python in the tree, but shouldn't those also be\ncovered if we do this?\n\n[*.py]\nindent_style = space\nindent_size = 4\n\ncheers ./daniel\n\n",
"msg_date": "Wed, 11 Dec 2019 17:09:43 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Add .editorconfig"
},
{
"msg_contents": "On 2019-12-11 17:09, Daniel Gustafsson wrote:\n>> On 11 Dec 2019, at 17:00, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>>\n>> There were a couple of recent threads that wanted to add an .editorconfig file but never actually ended up doing so.[0][1] Here is a patch. It is meant to match more or less what's in .dir-locals.el.\n> \n> +[*.{c,h,l,y,pl,pm}]\n> \n> What about *.sh?\n> \n> Not that there is much of Python in the tree, but shouldn't those also be\n> covered if we do this?\n> \n> [*.py]\n> indent_style = space\n> indent_size = 4\n\nThose were not in the Emacs config either.\n\nI'd be inclined to leave the Python stuff alone, in particular. The \nPEP-8 style appears to be universally accepted as the default, so we \ndon't need to specify it here. We only need to specify what's different \nor where no sensible default exists.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 11 Dec 2019 17:28:09 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Add .editorconfig"
},
{
"msg_contents": "I have not used .editorconfig that much, but would it makes sense to add \nthe below?\n\n[*]\nend_of_line = lf\n\n-- \nAndreas Karlsson\n\n\n",
"msg_date": "Wed, 11 Dec 2019 18:54:05 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Add .editorconfig"
},
{
"msg_contents": "On 2019-12-11 18:54, Andreas Karlsson wrote:\n> I have not used .editorconfig that much, but would it makes sense to add\n> the below?\n> \n> [*]\n> end_of_line = lf\n\nI think that would best be done in response to an actual need.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 12 Dec 2019 09:33:32 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Add .editorconfig"
},
{
"msg_contents": "On 2019-12-11 17:00, Peter Eisentraut wrote:\n> There were a couple of recent threads that wanted to add an\n> .editorconfig file but never actually ended up doing so.[0][1] Here is\n> a patch. It is meant to match more or less what's in .dir-locals.el.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 18 Dec 2019 09:14:40 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Add .editorconfig"
}
] |
[
{
"msg_contents": "De: Stephen Frost\nEnviadas: Quarta-feira, 11 de Dezembro de 2019 15:34\n\n>I agree with not breaking things but that doesn't mean the only\n>reasonable approach is to do the absolute minimum- you might not be\n>breaking something today, but it's going to confuse people later on down\n>the road and may lead to bugs being introduced due to that confusion, or\n>at the very least will add to people's time to figure out what's really\n>going on.\nI don't know how such fixes could lead to more bugs.\nCurrently there is a risk of having bugs by mixing access to shadow variables with macros.\nI believe, that others has the same opinion.\nhttps://www.postgresql.org/message-id/CA%2BTgmoZsM04VCKn4n8dsXxg_s8drPUHUafshG%3DP0edVjUS3Gew%40mail.gmail.com\nand\nhttps://www.postgresql.org/message-id/20191209090329.GC72921%40paquier.xyz\n\n>I wasn't suggesting to change the names of the global variables in this\n>specific case, though I could see that being a better approach in some\n>instances- but it really depends. Each case needs to be reviewed and\n>considered and the best approach taken.\nAgain, depending on the case, whether the best approach is to promote structure creation, variable removal, and logic changes, for now, is really beyond my reach.\n\n>I think we need to be looking at the changes and considering them, and\n>the person proposing the changes should be doing that and not just\n>expecting everyone else to do so.\nAgain, I am considering only the range of my changes, which are minimal, so less likely to do something wrong, or hinder future development.\n\n>I'd suggest that we work through that then and get you up to speed on\n>the structures and logic- the pg_dump code is pretty ugly but the\n>specific usage of numTables isn't too bad. Each of these should be\n>looked at independently and thought about \"what's the right way to fix\n>this?\" The right way isn't necessairly to just rename the variables, as\n>I was saying, and doing so may lead to more confusion, not less.\nThis way it will take a long time to eliminate all name collisions.\nAnd worse, in my opinion, will continue to be adding new cases, since there is no rule, so check if this happens in the current development.\nNot only are they global, there are dozens, perhaps hundreds of shadow local variables.\nI was working on this second class of variables, which, in my opinion, would lead to less code, less bugs, and more security for the code, but I realize that my effort may not be worth it.\n\n>Having shadowed globals, while kinda ugly, doesn't necessairly mean it's\n>a bug. I'm not sure what \"minor performance improvements\" are being\n>claimed here but there's a whole lot of work involved in demonstrating\n>that a change is a performance improvement.\nI was referring to contributions like this:\nhttps://github.com/postgres/postgres/commit/91da65f4ac2837e0792071e42b2e2101059f1b1b\nand not specifically, performance improvements in this global unshadow patch.\n\n>but.. the changes you're proposing are making them inconsistent and\n>confusing when there isn't actually a difference between the global and\n>the local, it's just the somewhere along the way someone thought they\n>needed to pass in numTables when they really didn't, and we should go\n>fix *that*, not rename the variable to something else to make someone\n>later on go \"wait, why did we need to pass in this variable? how is\n>this different from the global?\"\nI'm confused here, but if you suggest removing the numTables variable is out of my reach.\nKeeping the same name as the variable, the collision will continue, and the purpose of enabling -Wshadow will never be done someday.\n\n>Perhaps I'm a bit confused, but it seems that you're the author of these\n>specific changes, and I'm trying to provide feedback as to how you can\n>improve what you're proposing in a way that will improve the code base\n>overall and reduce the confusion while also eliminating the shadow\n>variables. If the author of a patch isn't open to this kind of review\n>and willing to adjust the patch to improve it because they aren't sure\n>that their changes will be correct then they could at least post them\n>back here and ask, or better, go look at the code and get a better\n>understanding of what's going on to build confidence in the change.\nI am the author of the patch.\nI'm repeating myself, but come on, I don't have confidence in proposing logic-altering changes for now.\n\n>The goal here also shouldn't be \"we just want to turn on this alert, so\n>we're going to make changes to the source without thinking just to\n>appease the compiler\".\nOf course there is a difference in thinking here.\nThe changes I propose, in my opinion, are consistent, readable and break nothing.\n\n>I agree with fixing collisions, but not in a rote way like this.\nGlad you think alike about fixing collisions.\n\nregards,\nRanier Vilela",
"msg_date": "Wed, 11 Dec 2019 17:46:02 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "On Wed, Dec 11, 2019 at 9:46 AM Ranier Vilela <ranier_gyn@hotmail.com>\nwrote:\n\n> I am the author of the patch.\n> I'm repeating myself, but come on, I don't have confidence in proposing\n> logic-altering changes for now.\n>\n>\nThen you need to stop and patch the holes and not just throw paint on the\nwall to cover things up.\n\nSpecific example\n\nsrc/bin/pg_dump/common.c\n\nThat file discusses the purpose of numTables\n\n * These variables are static to avoid the notational cruft of having to\npass\n * them into findTableByOid() and friends.\n\nThen the file goes and doesn't follow that logic by passing numTables\naround to a bunch of functions within itself.\n\nThe fix here very much appears to be to remove the spurious numTables in\nthe functions.\n\nHowever, if you cannot, or will not, take the opportunity to correct it\nproperly - as has been asked earlier for this specific file - then please\njust leave it alone.\n\nThere have been plenty of emails on these threads where folks have looked\nat your work and discussed whether or not specific things should be changed\nbased on your analysis - that's an amazing thing to see occur - but that's\ngetting overwhelmed by your inability to take a step back and stop just\nthrowing stuff on the wall.\n\nI've mentioned inconsistencies in your patches - that is a product of just\ntrying to throw something on the wall to cover over the issue - hiding a\nhole in the wall with something doesn't remove the hole in the wall.\n\nYou would be so much better off taking on one specific instance at a time\nand working with folks to learn how the code functions. If you don't think\nyou can handle the bigger issues - then stick with things like numTables\nand the clear issues within your grasp first.\n\nI truly do wish you all the best - but you do not seem to be approaching\nthese issues with the correct mindset at the moment. 
Volume is not the\nwinner over quality here.\n\nJohn W Higgins",
"msg_date": "Wed, 11 Dec 2019 10:33:20 -0800",
"msg_from": "John W Higgins <wishdev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "Greetings,\n\n* Ranier Vilela (ranier_gyn@hotmail.com) wrote:\n> De: Stephen Frost\n> Enviadas: Quarta-feira, 11 de Dezembro de 2019 15:34\n> \n> >I agree with not breaking things but that doesn't mean the only\n> >reasonable approach is to do the absolute minimum- you might not be\n> >breaking something today, but it's going to confuse people later on down\n> >the road and may lead to bugs being introduced due to that confusion, or\n> >at the very least will add to people's time to figure out what's really\n> >going on.\n> I don't know how such fixes could lead to more bugs.\n> Currently there is a risk of having bugs by mixing access to shadow variables with macros.\n\nI really don't have any doubts that it's going to lead to confusion,\nparticularly in a case like the numTables vs. nTables thing you're\nproposing in the one case that I spent some time looking at, and that\nconfusion certainly could lead to bugs. Sure, having shadow variables\nalso could- but you haven't identified an actual bug there today, so why\nnot just fix it in a way that eliminates the confusion here?\n\nHere's an example of my concern- we change the name of the numTables\nvariable in these pg_dump functions to nTables as you propose... And\nthen later on someone starts hacking on these functions and they know\nabout the global and they start using it, so now we've got two\nvariables, both able to be used in the same scope, but one of them is a\nglobal and the other is a local. As long as both are always the same,\nsure everything works- but what happens if, for whatever reason, someone\nuses the function in a new way and passes in a different value as an\nargument, one that doesn't match what the global has? Well, some of the\ncode will use the argument, and some of the code won't. 
At least today,\nthere's no chance that the global variable will be used inside that\nfunction- it's *always* going to use the argument passed in.\n\nI don't think that's even that far-fetched of a possibility considering\nmost of the code is using the global variable directly and these\nfunctions are really the odd ones where numTables is being passed in as\nan argument, so ending up with a mix in the function looks rather likely\nto happen, and a bug resulting from that inconsistency entirely\npossible.\n\nIt's also possible that the changes you're proposing might themselves\ninduce bugs- by keeping the variable and just renaming it, you had\nbetter be ABSOLUTELY sure you rename every case because, if you don't,\neverything will still work just *fine*, except where you missed a case,\nthe code will reference the global and the compiler won't complain and\nit might very well look like everything is working.\n\nEither way, in my view, you had better review the code, have an\nunderstanding of how it works, and make sure that the change you're\nmaking is correct and makes sense, and that you've tested it well.\n\n> >I wasn't suggesting to change the names of the global variables in this\n> >specific case, though I could see that being a better approach in some\n> >instances- but it really depends. Each case needs to be reviewed and\n> >considered and the best approach taken.\n> Again, depending on the case, whether the best approach is to promote structure creation, variable removal, and logic changes, for now, is really beyond my reach.\n\nThen I'd suggest that you spend some time looking at each case and\nworking through what the best approach is and proposing patches that use\nthe best approach in each case. If you don't wish to spend time on\nthat, that's fine, but I don't agree with this approach of just pushing\nthrough and making changes just to satisfy a particular compiler\nwarning. 
I don't anticipate further discussion on this changing my mind\non this point.\n\n> >I think we need to be looking at the changes and considering them, and\n> >the person proposing the changes should be doing that and not just\n> >expecting everyone else to do so.\n> Again, I am considering only the range of my changes, which are minimal, so less likely to do something wrong, or hinder future development.\n\nI've already pointed out why I don't think this is the right approach to\nbe addressing these issues, and it seems that you don't disagree with me\nabout the recommended changes I've suggested, you've just said that you\nonly want to think about or care about renaming of variables and I am\nspecifically saying that's not an acceptable approach to addressing\nthese issues.\n\n> >I'd suggest that we work through that then and get you up to speed on\n> >the structures and logic- the pg_dump code is pretty ugly but the\n> >specific usage of numTables isn't too bad. Each of these should be\n> >looked at independently and thought about \"what's the right way to fix\n> >this?\" The right way isn't necessarily to just rename the variables, as\n> >I was saying, and doing so may lead to more confusion, not less.\n> This way it will take a long time to eliminate all name collisions.\n\nWhy is that an issue?\n\n> And worse, in my opinion, new cases will keep being added, since there is no rule or check to prevent this in current development.\n\nFeel free to monitor the situation and complain about new patches which\nare proposed that add to them. I don't have any particular problem with\nthat. 
Nor do I object to generally pushing forward with the goal of\neliminating the existing ones.\n\nLet me lay this out in a different way- we could do the exact same thing\nyou're doing here by just mindlessly changing, right before we commit,\nany variables that shadow global variables, we'd eliminate the compiler\nerror, but it doesn't directly make anything *better* by itself, and\nultimately isn't really all that different from the current situation\nwhere the compiler is essentially doing this for us by managing the\nvariables as shadowing the globals thanks to C scoping rules, except\nthat we add in the possibility of mixing usage of the local and the\nglobal throughout the functions therefore adding to the confusion.\n\n> Not only the globals: there are dozens, perhaps hundreds, of shadowed local variables.\n\nThat doesn't actually make any of them bugs though.\n\n> I was working on this second class of variables, which, in my opinion, would lead to less code, fewer bugs, and more security for the code, but I realize that my effort may not be worth it.\n\nI'm all for working to eliminate these shadow variables, but, again, not\nthrough rote renaming of the locals without putting in any real thought\nabout what the code is doing and working out what the right approach to\nsuch a change to eliminate the shadow variables should be.\n\n> >Having shadowed globals, while kinda ugly, doesn't necessarily mean it's\n> >a bug. 
I'm not sure what \"minor performance improvements\" are being\n> >claimed here but there's a whole lot of work involved in demonstrating\n> >that a change is a performance improvement.\n> I was referring to contributions like this:\n> https://github.com/postgres/postgres/commit/91da65f4ac2837e0792071e42b2e2101059f1b1b\n> and not specifically, performance improvements in this global unshadow patch.\n\nOk, so this specific \"global unshadow patch\" is all about bugs, but\nwithout actually showing that there's actual bugs here, just that there\nare shadowed variables... In which case, the real question is \"is this\nchange an improvement to the code\" and I'm arguing that just the act of\nchanging the variable names to avoid shadowing isn't necessarily a code\nimprovement- that has to be evaluated on a case-by-case basis. If\nyou're not going to do that evaluation then you're just throwing changes\nat the community with the expectation that someone else will do the\nanalysis and decide if the changes are worthwhile or not and that\nstrikes me as not really being very helpful. I'd really rather work\ntowards patches that are clear improvements which have been well\nconsidered by the proposer of the patch.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 11 Dec 2019 13:57:31 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
}
] |
[
{
"msg_contents": "De: Stephen Frost\nEnviadas: Quarta-feira, 11 de Dezembro de 2019 18:57\n>I really don't have any doubts that it's going to lead to confusion,\n>particularly in a case like the numTables vs. nTables thing you're\n>proposing in the one case that I spent some time looking at, and that\n>confusion certainly could lead to bugs. Sure, having shadow variables\n>also could- but you haven't identified an actual bug there today, so why\n>not just fix it in a way that eliminates the confusion here?\nI'm starting to think you're absolutely right.\n\n>Here's an example of my concern- we change the name of the numTables\n>variable in these pg_dump functions to nTables as you propose... And\n>then later on someone starts hacking on these functions and they know\n>about the global and they start using it, so now we've got two\n>variables, both able to be used in the same scope, but one of them is a\n>global and the other is a local. As long as both are always the same,\n>sure everything works- but what happens if, for whatever reason, someone\n>uses the function in a new way and passes in a different value as an\n>argument, one that doesn't match what the global has? Well, some of the\n>code will use the argument, and some of the code won't. 
At least today,\n>there's no chance that the global variable will be used inside that\n>function- it's *always* going to use the argument passed in.\nUnderstood.\n\n>I don't think that's even that far-fetched of a possibility considering\n>most of the code is using the global variable directly and these\n>functions are really the odd ones where numTables is being passed in as\n>an argument, so ending up with a mix in the function looks rather likely\n>to happen, and a bug resulting from that inconsistency entirely\n>possible.\nYes.\n\n>It's also possible that the changes you're proposing might themselves\n>induce bugs- by keeping the variable and just renaming it, you had\n>better be ABSOLUTELY sure you rename every case because, if you don't,\n>everything will still work just *fine*, except where you missed a case,\n>the code will reference the global and the compiler won't complain and\n>it might very well look like everything is working.\nI can tell you that I tried to take every precaution in that direction.\nBut it is really not exempt from this risk.\n\n>Either way, in my view, you had better review the code, have an\n>understanding of how it works, and make sure that the change you're\n>making is correct and makes sense, and that you've tested it well.\nThis view is very correct.\n\n>Then I'd suggest that you spend some time looking at each case and\n>working through what the best approach is and proposing patches that use\n>the best approach in each case. If you don't wish to spend time on\n>that, that's fine, but I don't agree with this approach of just pushing\n>through and making changes just to satisfy a particular compiler\n>warning. 
I don't anticipate further discussion on this changing my mind\n>on this point.\n\n>I've already pointed out why I don't think this is the right approach to\n>be addressing these issues, and it seems that you don't disagree with me\n>about the recommended changes I've suggested, you've just said that you\n>only want to think about or care about renaming of variables and I am\n>specifically saying that's not an acceptable approach to addressing\n>these issues.\n\n>Why is that an issue?\nIt's not anymore.\n\n>Feel free to monitor the situation and complain about new patches which\n>are proposed that add to them. I don't have any particular problem with\n>that. Nor do I object to generally pushing forward with the goal of\n>eliminating the existing ones.\nIt would be better and safer to do this automatically in the course of development.\nAlone, the risk of failure is high.\n\n>Let me lay this out in a different way- we could do the exact same thing\n>you're doing here by just mindlessly changing, right before we commit,\n>any variables that shadow global variables, we'd eliminate the compiler\n>error, but it doesn't directly make anything *better* by itself, and\n>ultimately isn't really all that different from the current situation\n>where the compiler is essentially doing this for us by managing the\n>variables as shadowing the globals thanks to C scoping rules, except\n>that we add in the possibility of mixing usage of the local and the\n>global throughout the functions therefore adding to the confusion.\nThis is the light.\n\n>That doesn't actually make any of them bugs though.\nTruth.\n\n>I'm all for working to eliminate these shadow variables, but, again, not\n>through rote renaming of the locals without putting in any real thought\n>about what the code is doing and working out what the right approach to\n>such a change to eliminate the shadow variables should be.\n>Ok, so this specific \"global unshadow patch\" is all about bugs, but\n>without actually showing that 
there's actual bugs here, just that there\n>are shadowed variables... In which case, the real question is \"is this\n>change an improvement to the code\" and I'm arguing that just the act of\n>changing the variable names to avoid shadowing isn't necessarily a code\n>improvement- that has to be evaluated on a case-by-case basis. If\n>you're not going to do that evaluation then you're just throwing changes\n>at the community with the expectation that someone else will do the\n>analysis and decide if the changes are worthwhile or not and that\n>strikes me as not really being very helpful. I'd really rather work\n>towards patches that are clear improvements which have been well\n>considered by the proposer of the patch.\nSorry for not being able to answer your considerations point by point,\nbut I think I finally got to understand them correctly.\nBy renaming, we would be hiding the dirt under the carpet.\nAnd that is absolutely not what I want.\nFrom your point of view and from John's, the fact that the compiler warns us of the dangers of collisions is much better than simply turning them off by renaming them.\nSeeing that, I have to accept.\n\n1. So I would like to ask you if at least what has consensus could be used.\nOr is it better to leave everything as it is?\n\n2. About local shadow variables, would you find it safe to do redundant declaration removals of the type: int i? Is it worth it to work on that?\n\nBest regards,\nRanier Vilela",
"msg_date": "Wed, 11 Dec 2019 21:05:22 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [Proposal] Level4 Warnings show many shadow vars"
},
{
"msg_contents": "Greetings,\n\n* Ranier Vilela (ranier_gyn@hotmail.com) wrote:\n> 1.So I would like to ask you if at least what has consensus could be used.\n> Or is it better to leave everything as it is?\n\nAs I tried to say before- I'm all for working to eliminate the shadow\nvariables, but it should be on a case-by-case basis where the change\nproposed is a well considered change by someone who has taken the time\nto understand what the code is doing, looked at the call sites, made a\nwell reasoned argument for why the change is an improvement and reduces\nconfusion (which can't just be \"because the compiler said we should make\nthis change\"- compilers aren't nearly intelligent enough to give us the\nright answer about what the code should look like- they can only point\nout potential issues, and they're often bad at even doing that), and\nthen proposed a patch for each particular case where the patch is\naddressing a specific set of shadow variable cases that somehow go\ntogether.\n\nA patch to address the numTables issues in pg_dump would be great, for\nexample. A single patch that renames numTables in pg_dump and then\nmakes a bunch of completely unrelated changes to things in the backend\nisn't what I'd consider a reasonable grouping of changes.\n\n> 2.About local shadow variables, would you find it safe to do redundant declaration removals of the type: int i? 
Is it worth it to work on that?\n\nI really don't think you're going to find much in the PG code where\nthere would be general consensus of a broad renaming or modifying of\nvariables without having to put serious thought into the specific\nchange.\n\nAt least, I hope people would push back on that kind of rote change.\n\nIn other words, without looking at the specific cases you're talking\nabout, I don't know if I'd agree with them or not, but please don't just\nsubmit patches that just rename things without having looked at the\ncode, gained some understanding of what the code does, and considered if\nthe change you want to make is a well reasoned improvement and makes the\ncode easier to read and understand.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 11 Dec 2019 16:44:55 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Level4 Warnings show many shadow vars"
}
] |
[
{
"msg_contents": "While reviewing the first patch in Asif Rehman's series of patches for\nparallel backup over at\nhttp://postgr.es/m/CADM=Jeg3ZN+kPQpiSfeWCXr=xgpLrq4cBQE5ZviUxygKq3VqiA@mail.gmail.com\nI discovered that commit 7117685461af50f50c03f43e6a622284c8d54694\nintroduced a use of cancel_before_shmem_exit which falsifies the\ncomment for that function. So you can cause a spurious WARNING in the\nlogs by doing something like this, with max_prepared_transactions set\nto a non-zero value:\n\nselect pg_start_backup('one', false, false);\nbegin;\nprepare transaction 'nothing';\nselect pg_stop_backup(false);\n\\q\n\nin the server log:\nWARNING: aborting backup due to backend exiting before pg_stop_backup\nwas called\n\nAnd you can cause an assertion failure like this:\n\nselect pg_start_backup('one', false, false);\nbegin;\nprepare transaction 'nothing';\nselect pg_stop_backup(false);\nselect pg_start_backup('two');\n\\q\n\nWe've discussed before the idea that it might be good to remove the\nlimitation that before_shmem_exit() can only remove the\nmost-recently-added callback, which would be one way to fix this\nproblem, but I'd like to propose an alternative solution which I think\nwill work out more nicely for the patch mentioned above: don't use\ncancel_before_shmem_exit, and just leave the callback registered for\nthe lifetime of the backend. That requires some adjustment of the\ncallback, since it needs to tolerate exclusive backup mode being in\neffect.\n\nThe attached patch takes that approach. Thoughts welcome on (1) the\napproach and (2) whether to back-patch. I think there's little doubt\nthat this is formally a bug, but it's a pretty minor one.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 11 Dec 2019 17:32:05 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "non-exclusive backup cleanup is mildly broken"
},
{
"msg_contents": "Hello.\n\nAt Wed, 11 Dec 2019 17:32:05 -0500, Robert Haas <robertmhaas@gmail.com> wrote in \n> While reviewing the first patch in Asif Rehman's series of patches for\n> parallel backup over at\n> http://postgr.es/m/CADM=Jeg3ZN+kPQpiSfeWCXr=xgpLrq4cBQE5ZviUxygKq3VqiA@mail.gmail.com\n> I discovered that commit 7117685461af50f50c03f43e6a622284c8d54694\n> introduced a use of cancel_before_shmem_exit which falsifies the\n> comment for that function. So you can cause a spurious WARNING in the\n> logs by doing something like this, with max_prepared_transactions set\n> to a non-zero value:\n> \n> select pg_start_backup('one', false, false);\n> begin;\n> prepare transaction 'nothing';\n> select pg_stop_backup(false);\n> \\q\n> \n> in the server log:\n> WARNING: aborting backup due to backend exiting before pg_stop_backup\n> was called\n> \n> And you can cause an assertion failure like this:\n> \n> select pg_start_backup('one', false, false);\n> begin;\n> prepare transaction 'nothing';\n> select pg_stop_backup(false);\n> select pg_start_backup('two');\n> \\q\n> \n> We've discussed before the idea that it might be good to remove the\n> limitation that before_shmem_exit() can only remove the\n> most-recently-added callback, which would be one way to fix this\n> problem, but I'd like to propose an alternative solution which I think\n> will work out more nicely for the patch mentioned above: don't use\n> cancel_before_shmem_exit, and just leave the callback registered for\n> the lifetime of the backend. That requires some adjustment of the\n> callback, since it needs to tolerate exclusive backup mode being in\n> effect.\n> \n> The attached patch takes that approach. Thoughts welcome on (1) the\n> approach and (2) whether to back-patch. 
I think there's little doubt\n> that this is formally a bug, but it's a pretty minor one.\n\nThe direction seems reasonable, but the patch doesn't free up the\nbefore_shmem_exit slot nor avoid duplicate registration of the\ncallback. Actually before_shmem_exit_list gets bloated with multiple\ndo_pg_abort_backup entries through repeated rounds of non-exclusive\nbackups.\n\nAs a result, if one ends a session while a non-exclusive backup is\nactive after closing the previous non-exclusive backup,\ndo_pg_abort_backup aborts with an assertion failure.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 12 Dec 2019 13:57:51 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: non-exclusive backup cleanup is mildly broken"
},
{
"msg_contents": "On Thu, Dec 12, 2019 at 5:58 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> Hello.\n>\n> At Wed, 11 Dec 2019 17:32:05 -0500, Robert Haas <robertmhaas@gmail.com>\n> wrote in\n> > While reviewing the first patch in Asif Rehman's series of patches for\n> > parallel backup over at\n> >\n> http://postgr.es/m/CADM=Jeg3ZN+kPQpiSfeWCXr=xgpLrq4cBQE5ZviUxygKq3VqiA@mail.gmail.com\n> > I discovered that commit 7117685461af50f50c03f43e6a622284c8d54694\n> > introduced a use of cancel_before_shmem_exit which falsifies the\n> > comment for that function. So you can cause a spurious WARNING in the\n> > logs by doing something like this, with max_prepared_transactions set\n> > to a non-zero value:\n> >\n> > select pg_start_backup('one', false, false);\n> > begin;\n> > prepare transaction 'nothing';\n> > select pg_stop_backup(false);\n> > \\q\n> >\n> > in the server log:\n> > WARNING: aborting backup due to backend exiting before pg_stop_backup\n> > was called\n> >\n> > And you can cause an assertion failure like this:\n> >\n> > select pg_start_backup('one', false, false);\n> > begin;\n> > prepare transaction 'nothing';\n> > select pg_stop_backup(false);\n> > select pg_start_backup('two');\n> > \\q\n> >\n> > We've discussed before the idea that it might be good to remove the\n> > limitation that before_shmem_exit() can only remove the\n> > most-recently-added callback, which would be one way to fix this\n> > problem, but I'd like to propose an alternative solution which I think\n> > will work out more nicely for the patch mentioned above: don't use\n> > cancel_before_shmem_exit, and just leave the callback registered for\n> > the lifetime of the backend. That requires some adjustment of the\n> > callback, since it needs to tolerate exclusive backup mode being in\n> > effect.\n> >\n> > The attached patch takes that approach. Thoughts welcome on (1) the\n> > approach and (2) whether to back-patch. 
I think there's little doubt\n> > that this is formally a bug, but it's a pretty minor one.\n>\n> The direction seems reasonable, but the patch doesn't free up the\n> before_shmem_exec slot nor avoid duplicate registration of the\n> callback. Actually before_shmem_exit_list gets bloat with multiple\n> do_pg_abort_backup entries through repeated rounds of non-exclusive\n> backups.\n>\n> As the result, if one ends a session while a non-exclusive backup is\n> active after closing the previous non-exclusive backup,\n> do_pg_abort_backup aborts for assertion failure.\n>\n\n+1, I also think the direction seems perfectly reasonable, but we should\navoid re-adding the callback since we're not removing it. Leaving it around\nseems cheap enough as long as there is only one.\n\nMy first reaction would be to just disallow the combination of prepared\ntransactions and start/stop backups. But looking at it, it seems like an\nunnecessary restriction and an approach like this one seems better.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Thu, 12 Dec 2019 13:52:31 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: non-exclusive backup cleanup is mildly broken"
},
{
"msg_contents": "On Thu, Dec 12, 2019 at 01:52:31PM +0100, Magnus Hagander wrote:\n> On Thu, Dec 12, 2019 at 5:58 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> wrote:\n>> The direction seems reasonable, but the patch doesn't free up the\n>> before_shmem_exec slot nor avoid duplicate registration of the\n>> callback. Actually before_shmem_exit_list gets bloat with multiple\n>> do_pg_abort_backup entries through repeated rounds of non-exclusive\n>> backups.\n>>\n>> As the result, if one ends a session while a non-exclusive backup is\n>> active after closing the previous non-exclusive backup,\n>> do_pg_abort_backup aborts for assertion failure.\n\nAgreed, that's an issue and do_pg_abort_backup should not touch\nsessionBackupState, so you should keep cancel_before_shmem_exit after\npg_stop_backup(). Other than that, I have looked in detail at how\nsafe it is to move before_shmem_exit(do_pg_abort_backup) before\ndo_pg_start_backup() and the cleanups of nonExclusiveBackups happen\nsafely and consistently in the event of an error during\npg_start_backup().\n\n> +1, I also think the direction seems perfectly reasonable, but we should\n> avoid re-adding the callback since we're not removing it. Leaving it around\n> seems cheap enough as long as there is only one.\n\n+ (errmsg(\"aborting backup due to backend exiting before\npg_stop_back up was called\")));\nNot sure that pg_stop_back exists ;p\n\n> My first reaction would be to just disallow the combination of prepared\n> transactions and start/stop backups. But looking at it it seems like an\n> unnecessary restriction and an approach like this one seems better.\n\nI think that's a bad idea to put a restriction of this kind. There\nare large consumers of 2PC, and everybody needs backups.\n--\nMichael",
"msg_date": "Fri, 13 Dec 2019 17:00:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: non-exclusive backup cleanup is mildly broken"
},
{
"msg_contents": "On Fri, Dec 13, 2019 at 9:00 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Dec 12, 2019 at 01:52:31PM +0100, Magnus Hagander wrote:\n> > On Thu, Dec 12, 2019 at 5:58 AM Kyotaro Horiguchi <\n> horikyota.ntt@gmail.com>\n> > wrote:\n>\n> > My first reaction would be to just disallow the combination of prepared\n> > transactions and start/stop backups. But looking at it it seems like an\n> > unnecessary restriction and an approach like this one seems better.\n>\n> I think that's a bad idea to put a restriction of this kind. There\n> are large consumers of 2PC, and everybody needs backups.\n>\n\nYou misunderstood me. I certainly didn't mean that people who use 2PC\nshouldn't be able to use proper backups -- that would be *terrible*.\n\nI meant disallowing pg_start_backup() in a session that had a prepared\ntransaction, and disallowing preparing a transaction in a session with an\nongoing backup. They would still work perfectly fine in *other* parallel\nsessions.\n\nThat said, being able to do it in the session itself is of course even\nbetter.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Fri, 13 Dec 2019 09:56:49 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: non-exclusive backup cleanup is mildly broken"
},
{
"msg_contents": "On 12/13/19 3:56 AM, Magnus Hagander wrote:\n> On Fri, Dec 13, 2019 at 9:00 AM Michael Paquier <michael@paquier.xyz \n> \n> I think that's a bad idea to put a restriction of this kind. There\n> are large consumers of 2PC, and everybody needs backups.\n> \n> \n> You misunderstood me. I certainly didn't mean that people who use 2PC \n> shouldn't be able to use proper backups -- that would be *terrible*.\n> \n> I meant disallowing pg_start_backup() in a session that had a prepared \n> transaction, and disallowing preparing a transaction in a session with \n> an ongoing backup. They would still work perfectly fine in *other* \n> parallel sessions.\n\n+1. I think it is reasonable to expect pg_start/stop_backup() to be \nperformed in its own session without prepared transactions.\n\n+more if this concession keeps other aspects of the code simpler.\n\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 13 Dec 2019 18:50:25 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: non-exclusive backup cleanup is mildly broken"
},
{
"msg_contents": "At Fri, 13 Dec 2019 18:50:25 -0500, David Steele <david@pgmasters.net> wrote in \n> On 12/13/19 3:56 AM, Magnus Hagander wrote:\n> > On Fri, Dec 13, 2019 at 9:00 AM Michael Paquier <michael@paquier.xyz I\n> > think that's a bad idea to put a restriction of this kind. There\n> > are large consumers of 2PC, and everybody needs backups.\n> > You misunderstood me. I certainly didn't mean that people who use 2PC\n> > shouldn't be able to use proper backups -- that would be *terrible*.\n> > I meant disallowing pg_start_backup() in a session that had a prepared\n> > transaction, and disallowing preparing a transaction in a session with\n> > an ongoing backup. They would still work perfectly fine in *other*\n> > parallel sessions.\n> \n> +1. I think it is reasonable to expect pg_start/stop_backup() to be\n> performed in its own session without prepared transactions.\n> \n> +more if this concession keeps other aspects of the code simpler.\n\nWhile I don't object to the restriction, couldn't we allow\ncancel_before_shmem_exit to search for the given entry by looping over\nthe before_shmem_exit array? If we don't do that, an assertion is needed\ninstead.\n\nSince pg_stop_backup_v2 is the only caller of the function in the\nwhole server code, making cancel_before_shmem_exit a bit wiser (and\nslower) cannot hurt anyone.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 16 Dec 2019 10:43:48 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: non-exclusive backup cleanup is mildly broken"
},
{
"msg_contents": "On Sun, Dec 15, 2019 at 8:44 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> However I don't object to the restriction, couldn't we allow the\n> cancel_before_shmem_exit to search for the given entry looping over\n> the before_shmem_exit array? If we don't do that, an assrtion is needed\n> instead.\n>\n> Since pg_stop_backup_v2 is the only caller to the function in the\n> whole server code, making cancel_before_shmem_exit a bit wiser (and\n> slower) cannot hurt anyone.\n\nThat's actually not true. It's called from\nPG_END_ENSURE_ERROR_CLEANUP. Still, it wouldn't cost a lot to fix this\nthat way. However, I think that it's better to fix it the other way,\nas I mentioned in my original email.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 16 Dec 2019 14:18:44 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: non-exclusive backup cleanup is mildly broken"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 4:19 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sun, Dec 15, 2019 at 8:44 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > However I don't object to the restriction, couldn't we allow the\n> > cancel_before_shmem_exit to search for the given entry looping over\n> > the before_shmem_exit array? If we don't do that, an assrtion is needed\n> > instead.\n> >\n> > Since pg_stop_backup_v2 is the only caller to the function in the\n> > whole server code, making cancel_before_shmem_exit a bit wiser (and\n> > slower) cannot hurt anyone.\n>\n> That's actually not true. It's called from\n> PG_END_ENSURE_ERROR_CLEANUP. Still, it wouldn't cost a lot to fix this\n> that way. However, I think that it's better to fix it the other way,\n> as I mentioned in my original email.\n\n+1\n\nNot only PREPARE but also other commands that we may add in the future\ncan cause the same issue, so it's better to address the root cause rather\nthan working around by disallowing PREPARE.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Tue, 17 Dec 2019 11:46:03 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: non-exclusive backup cleanup is mildly broken"
},
{
"msg_contents": "At Tue, 17 Dec 2019 11:46:03 +0900, Fujii Masao <masao.fujii@gmail.com> wrote in \n> On Tue, Dec 17, 2019 at 4:19 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Sun, Dec 15, 2019 at 8:44 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > However I don't object to the restriction, couldn't we allow the\n> > > cancel_before_shmem_exit to search for the given entry looping over\n> > > the before_shmem_exit array? If we don't do that, an assrtion is needed\n> > > instead.\n> > >\n> > > Since pg_stop_backup_v2 is the only caller to the function in the\n> > > whole server code, making cancel_before_shmem_exit a bit wiser (and\n> > > slower) cannot hurt anyone.\n> >\n> > That's actually not true. It's called from\n> > PG_END_ENSURE_ERROR_CLEANUP. Still, it wouldn't cost a lot to fix this\n> > that way. However, I think that it's better to fix it the other way,\n> > as I mentioned in my original email.\n\nSorry. I knew that.\n\n> +1\n> \n> Not only PREPARE but also other commands that we may add in the future\n> can cause the same issue, so it's better to address the root cause rather\n> than working around by disallowing PREPARE.\n\nI stand on that side. I'm not sure what we consider the root\ncause here, but PREPARE avoids duplicate registration using the static\nbool twophaseExitRegistered, and the most reasonable way to fix the\ncrash in the current patch would be to do the same thing as\nPREPARE. The attached does that and changes the if condition in\ncancel_before_shmem_exit into an assertion.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 17 Dec 2019 15:11:55 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: non-exclusive backup cleanup is mildly broken"
},
{
"msg_contents": "At Tue, 17 Dec 2019 15:11:55 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Tue, 17 Dec 2019 11:46:03 +0900, Fujii Masao <masao.fujii@gmail.com> wrote in \n> PREPARE. The attached does that and changes the if condition of\n> cancel_before_shmem_exit into assertion.\n\nThe patch can cause removal of the wrong cleanup function on a non-cassert\nbuild. That might be unwanted. But I think the assertion is needed\nanyway.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 17 Dec 2019 15:30:52 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: non-exclusive backup cleanup is mildly broken"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 1:31 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> The patch can cause removal of a wrong cleanup function on non-cassert\n> build. That might be unwanted. But I think the assertion is needed\n> anyway.\n\nI agree with the first part of this critique, but not necessarily with\nthe second part. Right now, if you call cancel_before_shmem_exit(),\nyou might not remove the handler that you intended to remove, but you\nwon't remove some unrelated handler. With the patch, if assertions are\ndisabled, you will definitely remove something, but it might not be\nthe thing you intended to remove. That seems worse.\n\nOn the question of whether the assertion is needed, it is currently\nthe case that you could call cancel_before_shmem_exit() without\nknowing whether you'd actually registered a handler or not. With the\nproposed change, that would no longer be legal. Maybe that's OK. But\nit doesn't seem entirely crazy to have some error-recovery path where\ncancel_before_shmem_exit() could get called twice in obscure\ncircumstances, and right now, you can rest easy, knowing that the\nsecond call just won't do anything. If we make it an assertion failure\nto do such things, then you can't. On the other hand, maybe there's no\nuse for such a construct, and it'd be better to just reduce confusion.\nAnyway, I think this is a separate topic from fixing this specific\nproblem.\n\nSince there doesn't seem to be any opposition to my original fix,\nexcept for the fact that I included a bug in it, I'm going to go see\nabout getting that committed.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 17 Dec 2019 08:38:13 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: non-exclusive backup cleanup is mildly broken"
},
{
"msg_contents": "On Fri, Dec 13, 2019 at 3:00 AM Michael Paquier <michael@paquier.xyz> wrote:\n> Agreed, that's an issue and do_pg_abort_abort should not touch\n> sessionBackupState, so you should keep cancel_before_shmem_exit after\n> pg_stop_backup().\n\nI don't understand this comment, because that can't possibly work. It\nassumes either that nobody else is allowed to use before_shmem_exit()\nafter we do, or that cancel_before_shmem_exit() does something that it\ndoesn't actually do.\n\nIn general, before_shmem_exit() callbacks are intended to be\npersistent, and therefore it's the responsibility of the callback to\ntest whether any work needs to be done. This particular callback is an\nexception, assuming that it can remove itself when there's no longer\nany work to be done.\n\n> Other than that, I have looked in details at how\n> safe it is to move before_shmem_exit(do_pg_abort_backup) before\n> do_pg_start_backup() and the cleanups of nonExclusiveBackups happen\n> safely and consistently in the event of an error during\n> pg_start_backup().\n\nI came to the same conclusion, but I think it's still better to\nregister the callback first. If the callback is properly written to do\nnothing when there's nothing to do, then having it registered earlier\nis harmless. And if, in the future, do_pg_start_backup() should be\nchanged in such a way that, say, it can throw an error at the very\nend, then registering the handler first would prevent that from being\na bug.\n\nIt is generally more robust to register a cleanup handler in advance\nand then count on it to do the right thing than to try to write code\nthat isn't allowed to fail in the wrong place.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 17 Dec 2019 09:15:53 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: non-exclusive backup cleanup is mildly broken"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 8:38 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Since there doesn't seem to be any opposition to my original fix,\n> except for the fact that I included a bug in it, I'm going to go see\n> about getting that committed.\n\nPerhaps I spoke too soon: I'm not sure whether Michael's comments\namount to an objection. While I give him a chance to respond, here's\nan updated patch.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 17 Dec 2019 12:52:05 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: non-exclusive backup cleanup is mildly broken"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Perhaps I spoke too soon: I'm not sure whether Michael's comments\n> amount to an objection. While I give him a chance to respond, here's\n> an updated patch.\n\nTook a quick look. I agree that this seems a lot cleaner than the\nalternative proposals. I'd suggest however that the header comment\nfor do_pg_abort_backup could do with more work, perhaps along the\nlines of \"The odd-looking signature allows this to be registered\ndirectly as a shmem_exit handler\".\n\nPersonally I'd have kept the handler as a separate function that's just\na one-line wrapper around \"void do_pg_abort_backup(bool emit_warning)\".\nWe don't usually treat callbacks as functions to be also called in\ntheir own right. But if you don't want to do that, I'll settle for an\nacknowledgement of the hack in the comment.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Dec 2019 13:05:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: non-exclusive backup cleanup is mildly broken"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 7:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Perhaps I spoke too soon: I'm not sure whether Michael's comments\n> > amount to an objection. While I give him a chance to respond, here's\n> > an updated patch.\n>\n> Took a quick look. I agree that this seems a lot cleaner than the\n> alternative proposals. I'd suggest however that the header comment\n> for do_pg_abort_backup could do with more work, perhaps along the\n> lines of \"The odd-looking signature allows this to be registered\n> directly as a shmem_exit handler\".\n>\n> Personally I'd have kept the handler as a separate function that's just\n> a one-line wrapper around \"void do_pg_abort_backup(bool emit_warning)\".\n> We don't usually treat callbacks as functions to be also called in\n> their own right. But if you don't want to do that, I'll settle for an\n> acknowledgement of the hack in the comment.\n>\n\nAs would I, but I'm also fine with either of the two ways.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Tue, 17 Dec 2019 20:58:00 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: non-exclusive backup cleanup is mildly broken"
},
{
"msg_contents": "I wrote:\n> Took a quick look. I agree that this seems a lot cleaner than the\n> alternative proposals. I'd suggest however that the header comment\n> for do_pg_abort_backup could do with more work, perhaps along the\n> lines of \"The odd-looking signature allows this to be registered\n> directly as a shmem_exit handler\".\n\n> Personally I'd have kept the handler as a separate function that's just\n> a one-line wrapper around \"void do_pg_abort_backup(bool emit_warning)\".\n> We don't usually treat callbacks as functions to be also called in\n> their own right. But if you don't want to do that, I'll settle for an\n> acknowledgement of the hack in the comment.\n\nOh, scratch that --- looking closer, I see that the only two use-cases in\nthe patched code are via before_shmem_exit and PG_ENSURE_ERROR_CLEANUP,\nand both of those require a function with the signature of an on_exit\ncallback. So there's no need for a separate wrapper because this isn't\ngoing to be called any other way. I still recommend amending the\ncomment to explain why it has this signature, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Dec 2019 15:48:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: non-exclusive backup cleanup is mildly broken"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 12:52:05PM -0500, Robert Haas wrote:\n> On Tue, Dec 17, 2019 at 8:38 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> Since there doesn't seem to be any opposition to my original fix,\n>> except for the fact that I included a bug in it, I'm going to go see\n>> about getting that committed.\n> \n> Perhaps I spoke too soon: I'm not sure whether Michael's comments\n> amount to an objection. While I give him a chance to respond, here's\n> an updated patch.\n\n stoppoint = do_pg_stop_backup(label_file->data, waitforarchive, NULL);\n- cancel_before_shmem_exit(nonexclusive_base_backup_cleanup, (Datum) 0);\n[...]\n+void\n+register_persistent_abort_backup_handler(void)\n+{\n+ static bool already_done = false;\n+\n+ if (already_done)\n+ return;\nSo that's how you prevent piling up multiple registrations of this\ncallback compared to v1. FWIW, I think that it is a cleaner approach\nto remove the callback once a non-exclusive backup is done, because a\nsession has no need for it once it is done with its non-exclusive\nbackup, and this session may remain around for some time.\n\n+ if (emit_warning)\n+ ereport(WARNING,\n+ (errmsg(\"aborting backup due to backend exiting before\npg_stop_back up was called\")));\nThis warning is incorrect => \"pg_stop_back up\". (Mentioned upthread.)\n--\nMichael",
"msg_date": "Wed, 18 Dec 2019 08:40:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: non-exclusive backup cleanup is mildly broken"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 6:40 PM Michael Paquier <michael@paquier.xyz> wrote:\n> So that's how you prevent piling up multiple registrations of this\n> callback compared to v1. FWIW, I think that it is a cleaner approach\n> to remove the callback once a non-exclusive backup is done, because a\n> session has no need for it once it is done with its non-exclusive\n> backup, and this session may remain around for some time.\n\nThe fact that the session may remain around for some time isn't really\nrelevant, because the callback isn't consuming any resources. It does\nnot increase memory usage by a single byte, nor CPU consumption\neither. It does consume a few CPU cycles when the backend finally\nexits, but the number of such cycles is very small and unrelated to\nthe length of the session. And removing the callback isn't entirely\nfree, either.\n\nI think the real point for me is that it's bad to have two sources of\ntruth. We have the sessionBackupState variable that tells us whether\nwe're in a backup, and then we separately have whether or not the\ncallback is registered. If those two things ever get out of sync, as\nthey do in the test cases that I've proposed, then we have problems --\nso it's better not to maintain the state in two separate ways.\n\nThe way it's set up right now actually seems quite fragile even apart\nfrom the problem with cancel_before_shmem_exit(). do_pg_stop_backup()\nsets sessionBackupState to SESSION_BACKUP_NONE and then does things\nthat might fail. 
If they do, then the cancel_before_shmem_exit()\ncallback will leave the callback installed, which can lead to a\nspurious warning or assertion failure later as in the original report.\nThe only way to avoid that problem would be to move the\ncancel_before_shmem_exit() callback so that it happens right next to\nsetting sessionBackupState to SESSION_BACKUP_NONE -- note the comments\nthere saying we can't even do WALInsertLockRelease() between updating\nXLogCtl and updating sessionBackupState. But that would be very ugly,\nbecause we'd then have to pass a flag to do_pg_stop_backup() saying\nwhether to remove the callback, since only one caller wants that\nbehavior.\n\nAnd, we'd have to change cancel_before_shmem_exit() to search the\nwhole array, which would almost certainly cost more cycles than\nleaving a do-nothing callback around.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 17 Dec 2019 22:19:44 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: non-exclusive backup cleanup is mildly broken"
},
{
"msg_contents": "On Wed, Dec 18, 2019 at 12:19 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Dec 17, 2019 at 6:40 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > So that's how you prevent piling up multiple registrations of this\n> > callback compared to v1. FWIW, I think that it is a cleaner approach\n> > to remove the callback once a non-exclusive backup is done, because a\n> > session has no need for it once it is done with its non-exclusive\n> > backup, and this session may remain around for some time.\n>\n> The fact that the session may remain around for some time isn't really\n> relevant, because the callback isn't consuming any resources. It does\n> not increase memory usage by a single byte, nor CPU consumption\n> either. It does consume a few CPU cycles when the backend finally\n> exits, but the number of such cycles is very small and unrelated to\n> the length of the session. And removing the callback isn't entirely\n> free, either.\n>\n> I think the real point for me is that it's bad to have two sources of\n> truth. We have the sessionBackupState variable that tells us whether\n> we're in a backup, and then we separately have whether or not the\n> callback is registered. If those two things ever get out of sync, as\n> they do in the test cases that I've proposed, then we have problems --\n> so it's better not to maintain the state in two separate ways.\n>\n> The way it's set up right now actually seems quite fragile even apart\n> from the problem with cancel_before_shmem_exit(). do_pg_stop_backup()\n> sets sessionBackupState to SESSION_BACKUP_NONE and then does things\n> that might fail. 
If they do, then the cancel_before_shmem_exit()\n> callback will leave the callback installed, which can lead to a\n> spurious warning or assertion failure later as in the original report.\n> The only way to avoid that problem would be to move the\n> cancel_before_shmem_exit() callback so that it happens right next to\n> setting sessionBackupState to SESSION_BACKUP_NONE -- note the comments\n> there saying we can't even do WALInsertLockRelease() between updating\n> XLogCtl and updating sessionBackupState. But that would be very ugly,\n> because we'd then have to pass a flag to do_pg_stop_backup() saying\n> whether to remove the callback, since only one caller wants that\n> behavior.\n>\n> And, we'd have to change cancel_before_shmem_exit() to search the\n> whole array, which would almost certainly cost more cycles than\n> leaving a do-nothing callback around.\n\nIf pg_abort_backup callback function can be called safely even when\nthe backup is not in progress, we can just use the global variable like\npg_abort_backup_registered to register the callback function only\non first call. In this way, cancel_before_shmem_exit() doesn't need to\nsearch the array to get rid of the function.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Wed, 18 Dec 2019 13:36:17 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: non-exclusive backup cleanup is mildly broken"
},
{
"msg_contents": "At Tue, 17 Dec 2019 15:48:40 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> I wrote:\n> > Took a quick look. I agree that this seems a lot cleaner than the\n> > alternative proposals. I'd suggest however that the header comment\n> > for do_pg_abort_backup could do with more work, perhaps along the\n> > lines of \"The odd-looking signature allows this to be registered\n> > directly as a shmem_exit handler\".\n> \n> > Personally I'd have kept the handler as a separate function that's just\n> > a one-line wrapper around \"void do_pg_abort_backup(bool emit_warning)\".\n> > We don't usually treat callbacks as functions to be also called in\n> > their own right. But if you don't want to do that, I'll settle for an\n> > acknowledgement of the hack in the comment.\n> \n> Oh, scratch that --- looking closer, I see that the only two use-cases in\n> the patched code are via before_shmem_exit and PG_ENSURE_ERROR_CLEANUP,\n> and both of those require a function with the signature of an on_exit\n> callback. So there's no need for a separate wrapper because this isn't\n> going to be called any other way. I still recommend amending the\n> comment to explain why it has this signature, though.\n\nThe existing comment of do_pg_abort_backup follows seems to be a bit\nstale to me. I think it can be revised that way.\n\n * NB: This is only for aborting a non-exclusive backup that doesn't write\n * backup_label. A backup started with pg_start_backup() needs to be finished\n * with pg_stop_backup().\n\n\nDon't we need a regression test for this behavior?\n\n\n+ * Register a handler that will warn about unterminated backups at end of\n+ * session, unless this has already been done.\n\nThough I'm not sure of the necessity to do, it might need to mention\ncleaning up, which is actually done by the function.\n\n\nOther than the above looks good to me. 
The patch applies cleanly and\nworks as expected.\n\n\nIn another branch of this thread,\n\nAt Tue, 17 Dec 2019 08:38:13 -0500, Robert Haas <robertmhaas@gmail.com> wrote in \n> On the question of whether the assertion is needed, it is currently\n> the case that you could call cancel_before_shmem_exit() without\n> knowing whether you'd actually registered a handler or not. With the\n> proposed change, that would no longer be legal. Maybe that's OK. But\n> it doesn't seem entirely crazy to have some error-recovery path where\n> cancel_before_shmem_exit() could get called twice in obscure\n> circumstances, and right now, you can rest easy, knowing that the\n> second call just won't do anything. If we make it an assertion failure\n> to do such things, then you can't. On the other hand, maybe there's no\n> use for such a construct, and it'd be better to just reduce confusion.\n> Anyway, I think this is a separate topic from fixing this specific\n> problem.\n\nThe part is not for the case \"without knowing whether you'd actually\nregistered a handler or not\", but for the case \"without knowing\nwhether someone could have registered a handler or not after the\nregistration I made\". But there is not so much difference and I don't\ninsist on that because of the reason you mentioned as above. Thanks\nfor elaborating.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 18 Dec 2019 13:49:04 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: non-exclusive backup cleanup is mildly broken"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 11:36 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n> If pg_abort_backup callback function can be called safely even when\n> the backup is not in progress, we can just use the global variable like\n> pg_abort_backup_registered to register the callback function only\n> on first call. In this way, cancel_before_shmem_exit() doesn't need to\n> search the array to get rid of the function.\n\nRight. That's how the proposed patch works.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 18 Dec 2019 07:43:43 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: non-exclusive backup cleanup is mildly broken"
},
{
"msg_contents": "On Wed, Dec 18, 2019 at 07:43:43AM -0500, Robert Haas wrote:\n> On Tue, Dec 17, 2019 at 11:36 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n>> If pg_abort_backup callback function can be called safely even when\n>> the backup is not in progress, we can just use the global variable like\n>> pg_abort_backup_registered to register the callback function only\n>> on first call. In this way, cancel_before_shmem_exit() doesn't need to\n>> search the array to get rid of the function.\n> \n> Right. That's how the proposed patch works.\n\nWell, it seems like I am a poor lonesome cowboy on this one then.\nAnd what you are proposing does not break things either as far as I\nchecked, so I'll just go hide in a corner :)\n--\nMichael",
"msg_date": "Thu, 19 Dec 2019 16:13:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: non-exclusive backup cleanup is mildly broken"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 3:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Oh, scratch that --- looking closer, I see that the only two use-cases in\n> the patched code are via before_shmem_exit and PG_ENSURE_ERROR_CLEANUP,\n> and both of those require a function with the signature of an on_exit\n> callback.\n\nYeah, that's why I was surprised that you wanted shim functions.\n\n> So there's no need for a separate wrapper because this isn't\n> going to be called any other way. I still recommend amending the\n> comment to explain why it has this signature, though.\n\nDone, and committed.\n\nThanks,\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 19 Dec 2019 09:08:16 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: non-exclusive backup cleanup is mildly broken"
}
]
[
{
"msg_contents": "Folks,\n\nI've found myself writing a lot of boilerplate pg_hba.conf entries\nalong the lines of\n\n hostnossl all all 0.0.0.0/0 reject\n hostssl all all 0.0.0.0/0 md5\n\nso I thought I'd make it easier to do that from initdb.\n\nWhat say?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Thu, 12 Dec 2019 05:07:48 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Let people set host(no)ssl settings from initdb"
},
{
"msg_contents": "David Fetter <david@fetter.org> writes:\n> I've found myself writing a lot of boilerplate pg_hba.conf entries\n> along the lines of\n> hostnossl all all 0.0.0.0/0 reject\n> hostssl all all 0.0.0.0/0 md5\n> so I thought I'd make it easier to do that from initdb.\n> What say?\n\nI'm pretty suspicious of loading down initdb with random configuration\noptions, because I think most people nowadays use PG via vendor packages\nthat script their calls to initdb. So an option like this doesn't help\nunless you can persuade all those vendors to pass the option through.\n\nThat problem exists even before you get to the question of whether\nthis specific option is useful or well-designed ... a question I'm\nnot opining about here, but it would certainly require thought.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 12 Dec 2019 00:23:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Let people set host(no)ssl settings from initdb"
},
{
"msg_contents": "On Thu, Dec 12, 2019 at 12:23:42AM -0500, Tom Lane wrote:\n> David Fetter <david@fetter.org> writes:\n> > I've found myself writing a lot of boilerplate pg_hba.conf entries\n> > along the lines of\n> > hostnossl all all 0.0.0.0/0 reject\n> > hostssl all all 0.0.0.0/0 md5\n> > so I thought I'd make it easier to do that from initdb.\n> > What say?\n> \n> I'm pretty suspicious of loading down initdb with random configuration\n> options, because I think most people nowadays use PG via vendor packages\n> that script their calls to initdb. So an option like this doesn't help\n> unless you can persuade all those vendors to pass the option through.\n\nWould the official PGDG .deb and .rpm packages suffice?\n\n> That problem exists even before you get to the question of whether\n> this specific option is useful or well-designed ... a question I'm\n> not opining about here, but it would certainly require thought.\n\nI think it was a reasonable extension. We cover lines that start with\nlocal and host, but they can also start with hostssl and hostnossl.\n\nMeanwhile, please find attached a fix for an oversight around IPv6.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Thu, 12 Dec 2019 07:24:16 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Let people set host(no)ssl settings from initdb"
},
{
"msg_contents": "On 2019-12-12 07:24, David Fetter wrote:\n>> That problem exists even before you get to the question of whether\n>> this specific option is useful or well-designed ... a question I'm\n>> not opining about here, but it would certainly require thought.\n> I think it was a reasonable extension. We cover lines that start with\n> local and host, but they can also start with hostssl and hostnossl.\n\nI suspect the real purpose here is to easily reject non-SSL connections \naltogether. This is currently quite cumbersome and requires careful \nongoing maintenance of pg_hba.conf. But I see two problems with the \nproposed approach: (1) initdb doesn't support setting up SSL, so the \nonly thing you can achieve here is to reject all TCP/IP connections, \nuntil you have set up SSL. (2) The default pg_hba.conf only covers \nlocalhost connections. The value of enforcing SSL connections to \nlocalhost is probably quite low. You still need ongoing careful \npg_hba.conf maintenance as you add more host entries.\n\nMaybe we just need something like libpq's sslmode on the server side. \nProbably not quite the same, perhaps just ssl = require.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 12 Dec 2019 10:47:52 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Let people set host(no)ssl settings from initdb"
},
{
"msg_contents": "On Thu, Dec 12, 2019 at 10:47:52AM +0100, Peter Eisentraut wrote:\n> On 2019-12-12 07:24, David Fetter wrote:\n> > > That problem exists even before you get to the question of whether\n> > > this specific option is useful or well-designed ... a question I'm\n> > > not opining about here, but it would certainly require thought.\n> > I think it was a reasonable extension. We cover lines that start with\n> > local and host, but they can also start with hostssl and hostnossl.\n> \n> I suspect the real purpose here is to easily reject non-SSL connections\n> altogether. This is currently quite cumbersome and requires careful ongoing\n> maintenance of pg_hba.conf.\n\nYes, and kinda. It's certainly possible to put lines high up in\npg_hba.conf that read:\n\nhostnossl all all 0.0.0.0/0 reject\nhostnossl all all ::/0 reject\n\nand then the only ongoing maintenance is not to put lines above them\nthat contradict it.\n\n> But I see two problems with the proposed approach: (1) initdb\n> doesn't support setting up SSL, so the only thing you can achieve\n> here is to reject all TCP/IP connections, until you have set up SSL.\n\nI don't believe any special setup is needed to require TLS for the\nconnection, which is what this patch handles in a straightforward way.\n\nSetting up cert-based auth is the hassle you describe.\n\n> (2) The default pg_hba.conf only covers localhost connections.\n\nAs of this patch, it can be asked to cover all connections.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Thu, 12 Dec 2019 16:12:15 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Let people set host(no)ssl settings from initdb"
},
{
"msg_contents": "From: David Fetter <david@fetter.org>\n> > But I see two problems with the proposed approach: (1) initdb\n> > doesn't support setting up SSL, so the only thing you can achieve\n> > here is to reject all TCP/IP connections, until you have set up SSL.\n> \n> I don't believe any special setup is needed to require TLS for the\n> connection, which is what this patch handles in a straightforward way.\n\nI think this feature can be useful because it's common to reject remote non-TLS connections. Eliminating the need to script for pg_hba.conf is welcome. Setting GUC parameters just after initdb is relatively easy, because we can simply add lines at the end of postgresql.conf. But pg_hba.conf is not because the first matching entry is effective.\n\nIn terms of rejecting non-secure remote connections, should hostgssenc/hostnogssenc also be handled similarly?\n\n\n> > (2) The default pg_hba.conf only covers localhost connections.\n> \n> As of this patch, it can be asked to cover all connections.\n\n+ <term><option>--auth-hostssl=<replaceable class=\"parameter\">authmethod</replaceable></option></term>\n+ <listitem>\n+ <para>\n+ This option specifies the authentication method for users via\nfg\n+ TLS connections used in <filename>pg_hba.conf</filename>\n+ (<literal>hostssl</literal> lines).\n+ </para>\n\t+ </listitem>\n\nThe relationship between --auth/--auth-local/--auth-host and --auth-hostssl/--auth-hostnossl is confusing. The former is for local connections, and the latter is for remote ones. Can we just add \"remote\" in the above documentation?\n\nPlus, you're adding the first option to initdb that handles remote connections. As the following execution shows, it doesn't warn about using \"trust\" for remote connections.\n\n\n$ initdb --auth=md5 --pwprompt --auth-hostssl=trust --auth-hostnossl=trust\n...\nsyncing data to disk ... ok\n\nSuccess. 
You can now start the database server using:\n\n pg_ctl -D /tuna/pg2 -l logfile start\n\n\n\nI think we should emit a warning message like the following existing one:\n\n--------------------------------------------------\ninitdb: warning: enabling \"trust\" authentication for local connections\nYou can change this by editing pg_hba.conf or using the option -A, or\n--auth-local and --auth-host, the next time you run initdb.\n-\ninitdb: warning: enabling \"trust\" authentication \n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Wed, 8 Jan 2020 02:53:47 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Let people set host(no)ssl settings from initdb"
},
{
"msg_contents": "On Wed, Jan 08, 2020 at 02:53:47AM +0000, tsunakawa.takay@fujitsu.com wrote:\n> From: David Fetter <david@fetter.org>\n> > > But I see two problems with the proposed approach: (1) initdb\n> > > doesn't support setting up SSL, so the only thing you can achieve\n> > > here is to reject all TCP/IP connections, until you have set up SSL.\n> > \n> > I don't believe any special setup is needed to require TLS for the\n> > connection, which is what this patch handles in a straightforward way.\n> \n> I think this feature can be useful because it's common to reject remote non-TLS connections. Eliminating the need to script for pg_hba.conf is welcome. Setting GUC parameters just after initdb is relatively easy, because we can simply add lines at the end of postgresql.conf. But pg_hba.conf is not because the first matching entry is effective.\n> \n> In terms of rejecting non-secure remote connections, should hostgssenc/hostnogssenc also be handled similarly?\n\nYes, and they are in the enclosed patch.\n\n> > > (2) The default pg_hba.conf only covers localhost connections.\n> > \n> > As of this patch, it can be asked to cover all connections.\n> \n> + <term><option>--auth-hostssl=<replaceable class=\"parameter\">authmethod</replaceable></option></term>\n> + <listitem>\n> + <para>\n> + This option specifies the authentication method for users via\n> fg\n> + TLS connections used in <filename>pg_hba.conf</filename>\n> + (<literal>hostssl</literal> lines).\n> + </para>\n> \t+ </listitem>\n> \n> The relationship between --auth/--auth-local/--auth-host and --auth-hostssl/--auth-hostnossl is confusing. The former is for local connections, and the latter is for remote ones. Can we just add \"remote\" in the above documentation?\n\nDone.\n\n> Plus, you're adding the first option to initdb that handles remote connections. 
As the following execution shows, it doesn't warn about using \"trust\" for remote connections.\n> \n> \n> $ initdb --auth=md5 --pwprompt --auth-hostssl=trust --auth-hostnossl=trust\n> ...\n> syncing data to disk ... ok\n> \n> Success. You can now start the database server using:\n> \n> pg_ctl -D /tuna/pg2 -l logfile start\n> \n> \n> \n> I think we should emit a warning message like the following existing one:\n> \n> --------------------------------------------------\n> initdb: warning: enabling \"trust\" authentication for local connections\n> You can change this by editing pg_hba.conf or using the option -A, or\n> --auth-local and --auth-host, the next time you run initdb.\n> -\n> initdb: warning: enabling \"trust\" authentication \n\nDone.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Fri, 17 Jan 2020 20:47:49 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Let people set host(no)ssl settings from initdb"
},
{
"msg_contents": "On Fri, Jan 17, 2020 at 08:47:49PM +0100, David Fetter wrote:\n> On Wed, Jan 08, 2020 at 02:53:47AM +0000, tsunakawa.takay@fujitsu.com wrote:\n> > From: David Fetter <david@fetter.org>\n> > > > But I see two problems with the proposed approach: (1) initdb\n> > > > doesn't support setting up SSL, so the only thing you can achieve\n> > > > here is to reject all TCP/IP connections, until you have set up SSL.\n> > > \n> > > I don't believe any special setup is needed to require TLS for the\n> > > connection, which is what this patch handles in a straightforward way.\n> > \n> > I think this feature can be useful because it's common to reject remote non-TLS connections. Eliminating the need to script for pg_hba.conf is welcome. Setting GUC parameters just after initdb is relatively easy, because we can simply add lines at the end of postgresql.conf. But pg_hba.conf is not because the first matching entry is effective.\n> > \n> > In terms of rejecting non-secure remote connections, should hostgssenc/hostnogssenc also be handled similarly?\n> \n> Yes, and they are in the enclosed patch.\n> \n> > > > (2) The default pg_hba.conf only covers localhost connections.\n> > > \n> > > As of this patch, it can be asked to cover all connections.\n> > \n> > + <term><option>--auth-hostssl=<replaceable class=\"parameter\">authmethod</replaceable></option></term>\n> > + <listitem>\n> > + <para>\n> > + This option specifies the authentication method for users via\n> > fg\n> > + TLS connections used in <filename>pg_hba.conf</filename>\n> > + (<literal>hostssl</literal> lines).\n> > + </para>\n> > \t+ </listitem>\n> > \n> > The relationship between --auth/--auth-local/--auth-host and --auth-hostssl/--auth-hostnossl is confusing. The former is for local connections, and the latter is for remote ones. Can we just add \"remote\" in the above documentation?\n> \n> Done.\n> \n> > Plus, you're adding the first option to initdb that handles remote connections. 
As the following execution shows, it doesn't warn about using \"trust\" for remote connections.\n> > \n> > \n> > $ initdb --auth=md5 --pwprompt --auth-hostssl=trust --auth-hostnossl=trust\n> > ...\n> > syncing data to disk ... ok\n> > \n> > Success. You can now start the database server using:\n> > \n> > pg_ctl -D /tuna/pg2 -l logfile start\n> > \n> > \n> > \n> > I think we should emit a warning message like the following existing one:\n> > \n> > --------------------------------------------------\n> > initdb: warning: enabling \"trust\" authentication for local connections\n> > You can change this by editing pg_hba.conf or using the option -A, or\n> > --auth-local and --auth-host, the next time you run initdb.\n> > -\n> > initdb: warning: enabling \"trust\" authentication \n> \n> Done.\n> \n> Best,\n> David.\n\nThis time, with the patch attached.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Fri, 17 Jan 2020 21:15:29 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Let people set host(no)ssl settings from initdb"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nHi \r\nI applied the patch \"v3-0001-Enable-setting-pg_hba.conf-permissions-from-initd.patch\" and did some verification with it. The intended feature works overall and I think it is quite useful to support default auth methods for ssl and gss host types. I have also found some minor things in the patch and would like to share as below:\r\n\r\n> +\"# CAUTION: Configuring the system for \\\"trust\\\" authentication\\n\" \\\r\n> +\"# allows any user who can reach the databse on the route specified\\n\" \\\r\n> +\"# to connect as any PostgreSQL user, including the database\\n\" \\\r\n> +\"# superuser. If you do not trust all the users who could\\n\" \\\r\n> +\"# reach the database on the route specified, use a more restrictive\\n\" \\\r\n> +\"# authentication method.\\n\"\r\n\r\nFound a typo: should be 'database' instead of 'databse'\r\n\r\n> * a sort of poor man's grep -v\r\n> */\r\n> -#ifndef HAVE_UNIX_SOCKETS\r\n> static char **\r\n> filter_lines_with_token(char **lines, const char *token)\r\n> {\r\n> @@ -461,7 +466,6 @@ filter_lines_with_token(char **lines, const char *token)\r\n> \r\n> \treturn result;\r\n> }\r\n> -#endif\r\n\r\nI see that you have removed \"#ifndef HAVE_UNIX_SOCKETS\" around the filter_lines_with_token() function definition so it would be always available, which is used to remove the @tokens@ in case user does not specify a default auth method for the new hostssl, hostgss options. I think you should also remove the \"#ifndef HAVE_UNIX_SOCKETS\" around its declaration as well so both function definition and declaration would make sense.\r\n\r\n#ifndef HAVE_UNIX_SOCKETS\r\nstatic char **filter_lines_with_token(char **lines, const char *token);\r\n#endif\r\n\r\nCary Huang\r\n-------------\r\nHighGo Software Inc. 
(Canada)\r\ncary.huang@highgo.ca\r\nwww.highgo.ca",
"msg_date": "Mon, 06 Apr 2020 22:12:16 +0000",
"msg_from": "Cary Huang <cary.huang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Let people set host(no)ssl settings from initdb"
},
{
"msg_contents": "On Mon, Apr 06, 2020 at 10:12:16PM +0000, Cary Huang wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: tested, passed\n> \n> Hi \n> I applied the patch \"v3-0001-Enable-setting-pg_hba.conf-permissions-from-initd.patch\" and did some verification with it. The intended feature works overall and I think it is quite useful to support default auth methods for ssl and gss host types. I have also found some minor things in the patch and would like to share as below:\n> \n> > +\"# CAUTION: Configuring the system for \\\"trust\\\" authentication\\n\" \\\n> > +\"# allows any user who can reach the databse on the route specified\\n\" \\\n> > +\"# to connect as any PostgreSQL user, including the database\\n\" \\\n> > +\"# superuser. If you do not trust all the users who could\\n\" \\\n> > +\"# reach the database on the route specified, use a more restrictive\\n\" \\\n> > +\"# authentication method.\\n\"\n> \n> Found a typo: should be 'database' instead of 'databse'\n\nFixed.\n\n> > * a sort of poor man's grep -v\n> > */\n> > -#ifndef HAVE_UNIX_SOCKETS\n> > static char **\n> > filter_lines_with_token(char **lines, const char *token)\n> > {\n> > @@ -461,7 +466,6 @@ filter_lines_with_token(char **lines, const char *token)\n> > \n> > \treturn result;\n> > }\n> > -#endif\n> \n> I see that you have removed \"#ifndef HAVE_UNIX_SOCKETS\" around the\n> filter_lines_with_token() function definition so it would be always\n> available, which is used to remove the @tokens@ in case user does\n> not specify a default auth method for the new hostssl, hostgss\n> options. 
I think you should also remove the \"#ifndef\n> HAVE_UNIX_SOCKETS\" around its declaration as well so both function\n> definition and declaration would make sense.\n\nFixed.\n\nThanks very much for the review!\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Wed, 8 Apr 2020 18:28:10 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Let people set host(no)ssl settings from initdb"
},
{
"msg_contents": "The CF Patch Tester consider this patch to be malformed and is unable to apply\nand test it. Can you please submit a rebased version?\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 2 Jul 2020 16:02:21 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Let people set host(no)ssl settings from initdb"
},
{
"msg_contents": "On Thu, Jul 02, 2020 at 04:02:21PM +0200, Daniel Gustafsson wrote:\n> The CF Patch Tester consider this patch to be malformed and is unable to apply\n> and test it. Can you please submit a rebased version?\n\nI have looked at the patch of this thread, and I doubt that it is a\ngood idea to put more burden into initdb for that. I agree that\nbeing able to reject easily non-SSL connections in pg_hba.conf is a\nbit of a hassle now, but putting more logic into initdb does not seem\nthe right course to me. Perhaps we could consider an idea like\nPeter's to have a sslmode=require on the server side and ease the\ngeneration of HBA rules..\n\nThe patch has stalled for two months now without a rebase provided, so\nI am marking it as returned with feedback.\n--\nMichael",
"msg_date": "Mon, 7 Sep 2020 11:57:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Let people set host(no)ssl settings from initdb"
},
{
"msg_contents": "On Mon, Sep 07, 2020 at 11:57:58AM +0900, Michael Paquier wrote:\n> On Thu, Jul 02, 2020 at 04:02:21PM +0200, Daniel Gustafsson wrote:\n> > The CF Patch Tester consider this patch to be malformed and is unable to apply\n> > and test it. Can you please submit a rebased version?\n> \n> I have looked at the patch of this thread, and I doubt that it is a\n> good idea to put more burden into initdb for that. I agree that\n> being able to reject easily non-SSL connections in pg_hba.conf is a\n> bit of a hassle now, but putting more logic into initdb does not seem\n> the right course to me. Perhaps we could consider an idea like\n> Peter's to have a sslmode=require on the server side and ease the\n> generation of HBA rules..\n> \n> The patch has stalled for two months now without a rebase provided, so\n> I am marking it as returned with feedback.\n\nPlease find attached the rebased patch.\n\nPeter's suggestion seems a little more subtle to me than requiring TLS\non the server side in that what people generally want to do is\ndisallow clear text connections entirely. In those scenarios, people\nwould also want to set (and be able to change at runtime) some kind of\ncryptographic policy, as SSH and TLS do. While I see this as a worthy\ngoal, it's a much bigger lift than an optional argument or two to\ninitdb, and requires a lot more discussion than it's had to date.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Wed, 30 Dec 2020 20:24:06 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Let people set host(no)ssl settings from initdb"
},
{
"msg_contents": "On Wed, Dec 30, 2020 at 08:24:06PM +0100, David Fetter wrote:\n> On Mon, Sep 07, 2020 at 11:57:58AM +0900, Michael Paquier wrote:\n> > On Thu, Jul 02, 2020 at 04:02:21PM +0200, Daniel Gustafsson wrote:\n> > > The CF Patch Tester consider this patch to be malformed and is unable to apply\n> > > and test it. Can you please submit a rebased version?\n> > \n> > I have looked at the patch of this thread, and I doubt that it is a\n> > good idea to put more burden into initdb for that. I agree that\n> > being able to reject easily non-SSL connections in pg_hba.conf is a\n> > bit of a hassle now, but putting more logic into initdb does not seem\n> > the right course to me. Perhaps we could consider an idea like\n> > Peter's to have a sslmode=require on the server side and ease the\n> > generation of HBA rules..\n> > \n> > The patch has stalled for two months now without a rebase provided, so\n> > I am marking it as returned with feedback.\n> \n> Please find attached the rebased patch.\n> \n> Peter's suggestion seems a little more subtle to me than requiring TLS\n> on the server side in that what people generally want to do is\n> disallow clear text connections entirely. In those scenarios, people\n> would also want to set (and be able to change at runtime) some kind of\n> cryptographic policy, as SSH and TLS do. While I see this as a worthy\n> goal, it's a much bigger lift than an optional argument or two to\n> initdb, and requires a lot more discussion than it's had to date.\n\n*sigh*\n\nThis time with patch actually attached.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Wed, 30 Dec 2020 20:29:11 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Let people set host(no)ssl settings from initdb"
},
{
"msg_contents": "David Fetter <david@fetter.org> writes:\n> On Wed, Dec 30, 2020 at 08:24:06PM +0100, David Fetter wrote:\n>> On Mon, Sep 07, 2020 at 11:57:58AM +0900, Michael Paquier wrote:\n>>> I have looked at the patch of this thread, and I doubt that it is a\n>>> good idea to put more burden into initdb for that. I agree that\n>>> being able to reject easily non-SSL connections in pg_hba.conf is a\n>>> bit of a hassle now, but putting more logic into initdb does not seem\n>>> the right course to me. Perhaps we could consider an idea like\n>>> Peter's to have a sslmode=require on the server side and ease the\n>>> generation of HBA rules..\n\n>> Please find attached the rebased patch.\n>> \n>> Peter's suggestion seems a little more subtle to me than requiring TLS\n>> on the server side in that what people generally want to do is\n>> disallow clear text connections entirely. In those scenarios, people\n>> would also want to set (and be able to change at runtime) some kind of\n>> cryptographic policy, as SSH and TLS do. While I see this as a worthy\n>> goal, it's a much bigger lift than an optional argument or two to\n>> initdb, and requires a lot more discussion than it's had to date.\n\nFWIW, I still agree with what Michael says above. I do not think\nthat adding more options to initdb is a useful solution here.\nIn the first place, it's unlikely that we'll manage to cover many\npeople's exact requirements this way. In the second place, it's\nvery unclear where to stop adding options. In the third place,\nI believe the vast majority of users don't invoke initdb \"by hand\"\nanymore. The typical scenario is to go through a packager-provided\nscript, which almost certainly won't offer access to these additional\noptions. In the fourth place, many people won't know at initdb time\nexactly what they should do, or they'll change their minds later.\n\nThe last two points suggest that what'd be more useful is some sort\nof tool to modify an existing pg_hba.conf file. 
Or maybe even just\naudit a file to see if it implements $desired-policy, such as\n\"no unencrypted network connections\" or \"no plaintext passwords\".\n(I kind of like the auditing-tool approach; it seems less scary\nthan something that actually rewrites the file.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 30 Dec 2020 15:00:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Let people set host(no)ssl settings from initdb"
},
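The auditing-tool idea floated above — checking a pg_hba.conf against a policy such as "no unencrypted network connections" or "no plaintext passwords" — can be sketched in a few lines. This is a hypothetical illustration, not an existing PostgreSQL tool: the connection-type and auth-method classifications are assumptions, and a real checker would have to handle the full pg_hba.conf grammar (quoting, include directives, and so on).

```python
# Hypothetical sketch of the auditing idea: flag pg_hba.conf rules that
# allow unencrypted network connections or plaintext password auth.
# The classifications below are illustrative assumptions, not a real tool.

INSECURE_TYPES = {"host", "hostnossl"}     # may accept non-SSL TCP connections
PLAINTEXT_METHODS = {"password", "trust"}  # clear-text password, or none at all

def audit_hba(lines):
    """Return (lineno, message) pairs for rules violating the assumed policy."""
    findings = []
    for lineno, raw in enumerate(lines, start=1):
        line = raw.split("#", 1)[0].strip()  # drop comments and blank lines
        if not line:
            continue
        fields = line.split()
        conn_type, method = fields[0], fields[-1]
        if conn_type in INSECURE_TYPES:
            findings.append((lineno, "non-SSL network rule: " + line))
        elif conn_type != "local" and method in PLAINTEXT_METHODS:
            findings.append((lineno, "plaintext auth: " + line))
    return findings

sample = [
    "# TYPE  DATABASE  USER  ADDRESS        METHOD",
    "local   all       all                  peer",
    "host    all       all   0.0.0.0/0      password",
    "hostssl all       all   0.0.0.0/0      scram-sha-256",
]
for lineno, message in audit_hba(sample):
    print(lineno, message)
```

A read-only checker like this fits the preference expressed above for auditing over rewriting: it reports violations without ever touching the file.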
{
"msg_contents": "On Wed, 30 Dec 2020 at 15:00, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n\n> In the third place,\n> I believe the vast majority of users don't invoke initdb \"by hand\"\n> anymore. The typical scenario is to go through a packager-provided\n> script, which almost certainly won't offer access to these additional\n> options.\n\n\nI can't speak to other distributions, but on Ubuntu pg_createcluster allows\na -- followed by initdb options. So at least on Ubuntu any additional\noptions will indeed be available to everybody. I would hope that other\ndistributions have the same capability.\n\nI for one would like to be able to tell initdb (pg_createcluster) what to\nput in the first column of pb_hba.conf in the same way I can already use\n--auth{,-host,-local}= to set the auth-method column. Ideally, for simple\nsituations (think testing scripts and the like, rather than long-term\ninstallations) the pg_hba.conf could be created by initdb and not changed\nafter that.\n\nOn Wed, 30 Dec 2020 at 15:00, Tom Lane <tgl@sss.pgh.pa.us> wrote: In the third place,\nI believe the vast majority of users don't invoke initdb \"by hand\"\nanymore. The typical scenario is to go through a packager-provided\nscript, which almost certainly won't offer access to these additional\noptions.I can't speak to other distributions, but on Ubuntu pg_createcluster allows a -- followed by initdb options. So at least on Ubuntu any additional options will indeed be available to everybody. I would hope that other distributions have the same capability.I for one would like to be able to tell initdb (pg_createcluster) what to put in the first column of pb_hba.conf in the same way I can already use --auth{,-host,-local}= to set the auth-method column. Ideally, for simple situations (think testing scripts and the like, rather than long-term installations) the pg_hba.conf could be created by initdb and not changed after that.",
"msg_date": "Wed, 30 Dec 2020 15:17:47 -0500",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Let people set host(no)ssl settings from initdb"
},
{
"msg_contents": "On Wed, Dec 30, 2020 at 03:00:17PM -0500, Tom Lane wrote:\n> David Fetter <david@fetter.org> writes:\n> > On Wed, Dec 30, 2020 at 08:24:06PM +0100, David Fetter wrote:\n> >> On Mon, Sep 07, 2020 at 11:57:58AM +0900, Michael Paquier wrote:\n> >>> I have looked at the patch of this thread, and I doubt that it is a\n> >>> good idea to put more burden into initdb for that. I agree that\n> >>> being able to reject easily non-SSL connections in pg_hba.conf is a\n> >>> bit of a hassle now, but putting more logic into initdb does not seem\n> >>> the right course to me. Perhaps we could consider an idea like\n> >>> Peter's to have a sslmode=require on the server side and ease the\n> >>> generation of HBA rules..\n> \n> >> Please find attached the rebased patch.\n> >> \n> >> Peter's suggestion seems a little more subtle to me than requiring TLS\n> >> on the server side in that what people generally want to do is\n> >> disallow clear text connections entirely. In those scenarios, people\n> >> would also want to set (and be able to change at runtime) some kind of\n> >> cryptographic policy, as SSH and TLS do. While I see this as a worthy\n> >> goal, it's a much bigger lift than an optional argument or two to\n> >> initdb, and requires a lot more discussion than it's had to date.\n> \n> FWIW, I still agree with what Michael says above. I do not think\n> that adding more options to initdb is a useful solution here.\n> In the first place, it's unlikely that we'll manage to cover many\n> people's exact requirements this way. In the second place, it's\n> very unclear where to stop adding options. In the third place,\n> I believe the vast majority of users don't invoke initdb \"by hand\"\n> anymore. The typical scenario is to go through a packager-provided\n> script, which almost certainly won't offer access to these additional\n> options. 
In the fourth place, many people won't know at initdb time\n> exactly what they should do, or they'll change their minds later.\n\nTo that last, I suspect that there are folks in regulated industries\nwho want to be able to show that they've deployed at some kind of\nminimal level of protection. If there's not a window during which a\nnon-conforming pg_hba.conf is in play, that's easier to do.\n\n> The last two points suggest that what'd be more useful is some sort\n> of tool to modify an existing pg_hba.conf file. Or maybe even just\n> audit a file to see if it implements $desired-policy, such as\n> \"no unencrypted network connections\" or \"no plaintext passwords\".\n> (I kind of like the auditing-tool approach; it seems less scary\n> than something that actually rewrites the file.)\n\nAm I understanding correctly that you're suggesting we write up a\nformal specification of pg_hba.conf? That could be handy if we don't\nchoose to export the parser the backend uses for it, for example\nbecause we want it to respond super quickly to HUPs, which might\nconflict with making it usable by things that weren't the backend.\n\nI agree that anything that does a write to pg_hba.conf needs to be\napproached with a good deal of caution. Audit tools such as you\npropose could spit out a suggestion that doesn't overwrite, although\nit could get a little hairy if it's intended to patch something that\nhas an include directive in it.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Wed, 30 Dec 2020 22:07:49 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Let people set host(no)ssl settings from initdb"
},
{
"msg_contents": "On Wed, Dec 30, 2020 at 9:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> David Fetter <david@fetter.org> writes:\n> > On Wed, Dec 30, 2020 at 08:24:06PM +0100, David Fetter wrote:\n> >> On Mon, Sep 07, 2020 at 11:57:58AM +0900, Michael Paquier wrote:\n> >>> I have looked at the patch of this thread, and I doubt that it is a\n> >>> good idea to put more burden into initdb for that. I agree that\n> >>> being able to reject easily non-SSL connections in pg_hba.conf is a\n> >>> bit of a hassle now, but putting more logic into initdb does not seem\n> >>> the right course to me. Perhaps we could consider an idea like\n> >>> Peter's to have a sslmode=require on the server side and ease the\n> >>> generation of HBA rules..\n>\n> >> Please find attached the rebased patch.\n> >>\n> >> Peter's suggestion seems a little more subtle to me than requiring TLS\n> >> on the server side in that what people generally want to do is\n> >> disallow clear text connections entirely. In those scenarios, people\n> >> would also want to set (and be able to change at runtime) some kind of\n> >> cryptographic policy, as SSH and TLS do. While I see this as a worthy\n> >> goal, it's a much bigger lift than an optional argument or two to\n> >> initdb, and requires a lot more discussion than it's had to date.\n>\n> FWIW, I still agree with what Michael says above. I do not think\n> that adding more options to initdb is a useful solution here.\n> In the first place, it's unlikely that we'll manage to cover many\n> people's exact requirements this way. In the second place, it's\n> very unclear where to stop adding options. In the third place,\n> I believe the vast majority of users don't invoke initdb \"by hand\"\n> anymore. The typical scenario is to go through a packager-provided\n> script, which almost certainly won't offer access to these additional\n> options. 
In the fourth place, many people won't know at initdb time\n> exactly what they should do, or they'll change their minds later.\n>\n\nAFAIK bot the debian/ubuntu script mentioned by Isaac downthread, and the\nRedHat/Fedora ones do allow you to specify inidb options. That would cover\nthe majority I'd say...\n\nThat said, I agree with not adding it as an option to initdb. You'll\nquickly get to the point where you specify the whole pg_hba file on the\ncommandline to initdb -- and most people today who actually care that much\nabout it would have their pg_hba.conf file under some sort of configuration\nmanagement anyway, whether it's ansible, chef, puppet or something else.\n\n\nThe last two points suggest that what'd be more useful is some sort\n> of tool to modify an existing pg_hba.conf file. Or maybe even just\n>\n\nI don't think we need, or indeed want, a tool to *modify* pg_hba.conf. For\npeople who want that, there are already plenty options out there in the\nconfiguration management space, let's not invent our own.\n\n\n\n> audit a file to see if it implements $desired-policy, such as\n> \"no unencrypted network connections\" or \"no plaintext passwords\".\n> (I kind of like the auditing-tool approach; it seems less scary\n> than something that actually rewrites the file.)\n>\n\nAudiring, however, is a lot more interesting.\n\nFor people who actually care about most of this, it's not that important\nwhat the initial one is, if it can trivially be changed to become insecure.\nAnd unfortunately due to the complexity of pg_hba, that can easily happen.\nKeeping it under configuration management helps with that, but doesn't\nentirely solve the problem.\n\nAnother possible approach could be to add global gucs for\n\"allow_unencrypted_connections\" and maybe\n\"available_authentication_methods\". That would override pg_hba. At least in\ndoing so, there would be *one* spot where one could fairly strictly lock\nthings down. 
(Similar to Peters suggestion upthread)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Wed, Dec 30, 2020 at 9:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:David Fetter <david@fetter.org> writes:\n> On Wed, Dec 30, 2020 at 08:24:06PM +0100, David Fetter wrote:\n>> On Mon, Sep 07, 2020 at 11:57:58AM +0900, Michael Paquier wrote:\n>>> I have looked at the patch of this thread, and I doubt that it is a\n>>> good idea to put more burden into initdb for that. I agree that\n>>> being able to reject easily non-SSL connections in pg_hba.conf is a\n>>> bit of a hassle now, but putting more logic into initdb does not seem\n>>> the right course to me. Perhaps we could consider an idea like\n>>> Peter's to have a sslmode=require on the server side and ease the\n>>> generation of HBA rules..\n\n>> Please find attached the rebased patch.\n>> \n>> Peter's suggestion seems a little more subtle to me than requiring TLS\n>> on the server side in that what people generally want to do is\n>> disallow clear text connections entirely. In those scenarios, people\n>> would also want to set (and be able to change at runtime) some kind of\n>> cryptographic policy, as SSH and TLS do. While I see this as a worthy\n>> goal, it's a much bigger lift than an optional argument or two to\n>> initdb, and requires a lot more discussion than it's had to date.\n\nFWIW, I still agree with what Michael says above. I do not think\nthat adding more options to initdb is a useful solution here.\nIn the first place, it's unlikely that we'll manage to cover many\npeople's exact requirements this way. In the second place, it's\nvery unclear where to stop adding options. In the third place,\nI believe the vast majority of users don't invoke initdb \"by hand\"\nanymore. 
The typical scenario is to go through a packager-provided\nscript, which almost certainly won't offer access to these additional\noptions. In the fourth place, many people won't know at initdb time\nexactly what they should do, or they'll change their minds later.AFAIK bot the debian/ubuntu script mentioned by Isaac downthread, and the RedHat/Fedora ones do allow you to specify inidb options. That would cover the majority I'd say...That said, I agree with not adding it as an option to initdb. You'll quickly get to the point where you specify the whole pg_hba file on the commandline to initdb -- and most people today who actually care that much about it would have their pg_hba.conf file under some sort of configuration management anyway, whether it's ansible, chef, puppet or something else. \nThe last two points suggest that what'd be more useful is some sort\nof tool to modify an existing pg_hba.conf file. Or maybe even justI don't think we need, or indeed want, a tool to *modify* pg_hba.conf. For people who want that, there are already plenty options out there in the configuration management space, let's not invent our own. \naudit a file to see if it implements $desired-policy, such as\n\"no unencrypted network connections\" or \"no plaintext passwords\".\n(I kind of like the auditing-tool approach; it seems less scary\nthan something that actually rewrites the file.)Audiring, however, is a lot more interesting.For people who actually care about most of this, it's not that important what the initial one is, if it can trivially be changed to become insecure. And unfortunately due to the complexity of pg_hba, that can easily happen. Keeping it under configuration management helps with that, but doesn't entirely solve the problem.Another possible approach could be to add global gucs for \"allow_unencrypted_connections\" and maybe \"available_authentication_methods\". That would override pg_hba. 
At least in doing so, there would be *one* spot where one could fairly strictly lock things down. (Similar to Peters suggestion upthread)-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Fri, 1 Jan 2021 14:12:06 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Let people set host(no)ssl settings from initdb"
},
{
"msg_contents": "On 01.01.21 14:12, Magnus Hagander wrote:\n> That said, I agree with not adding it as an option to initdb. You'll \n> quickly get to the point where you specify the whole pg_hba file on the \n> commandline to initdb -- and most people today who actually care that \n> much about it would have their pg_hba.conf file under some sort of \n> configuration management anyway, whether it's ansible, chef, puppet or \n> something else.\n\nI think there is enough sustained opposition to this patch that we can \nmark this as rejected in the commitfest.\n\n\n",
"msg_date": "Wed, 3 Mar 2021 15:07:30 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Let people set host(no)ssl settings from initdb"
},
{
"msg_contents": "On 3/3/21 9:07 AM, Peter Eisentraut wrote:\n> On 01.01.21 14:12, Magnus Hagander wrote:\n>> That said, I agree with not adding it as an option to initdb. You'll \n>> quickly get to the point where you specify the whole pg_hba file on \n>> the commandline to initdb -- and most people today who actually care \n>> that much about it would have their pg_hba.conf file under some sort \n>> of configuration management anyway, whether it's ansible, chef, puppet \n>> or something else.\n> \n> I think there is enough sustained opposition to this patch that we can \n> mark this as rejected in the commitfest.\n\nAgreed. I will do that on MAR 8 (to leave time for discussion) unless \nsomebody beats me to it.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Wed, 3 Mar 2021 09:52:19 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Let people set host(no)ssl settings from initdb"
},
{
"msg_contents": "On Wed, Mar 03, 2021 at 03:07:30PM +0100, Peter Eisentraut wrote:\n> I think there is enough sustained opposition to this patch that we can mark\n> this as rejected in the commitfest.\n\n+1.\n--\nMichael",
"msg_date": "Thu, 4 Mar 2021 11:25:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Let people set host(no)ssl settings from initdb"
},
{
"msg_contents": "On Thu, Mar 4, 2021 at 7:25 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Mar 03, 2021 at 03:07:30PM +0100, Peter Eisentraut wrote:\n> > I think there is enough sustained opposition to this patch that we can\n> mark\n> > this as rejected in the commitfest.\n>\n> +1.\n> --\n> Michael\n>\n\nThe patch (v5-0001-Enable-setting-pg_hba.conf-permissions-from-initd.patch)\ndoes not apply successfully.\nThere are two reasons first is it was not generated with proper \"-p\" which\nconfused cfbot. Second, after\nfixing that issue you still need to rebase that.\n\n\nhttp://cfbot.cputube.org/patch_32_2916.log\n\n|+++ doc/src/sgml/ref/initdb.sgml\n--------------------------\nNo file to patch. Skipping patch.\n1 out of 1 hunk ignored\ncan't find file to patch at input line 77\nPerhaps you used the wrong -p or --strip option?\nThe text leading up to this was:\n\n\nCan we get a rebase?\n\nI am marking the patch \"Waiting on Author\"\n\n\n-- \nIbrar Ahmed\n\nOn Thu, Mar 4, 2021 at 7:25 AM Michael Paquier <michael@paquier.xyz> wrote:On Wed, Mar 03, 2021 at 03:07:30PM +0100, Peter Eisentraut wrote:\n> I think there is enough sustained opposition to this patch that we can mark\n> this as rejected in the commitfest.\n\n+1.\n--\nMichael\nThe patch (v5-0001-Enable-setting-pg_hba.conf-permissions-from-initd.patch) does not apply successfully.There are two reasons first is it was not generated with proper \"-p\" which confused cfbot. Second, afterfixing that issue you still need to rebase that.http://cfbot.cputube.org/patch_32_2916.log|+++ doc/src/sgml/ref/initdb.sgml--------------------------No file to patch. Skipping patch.1 out of 1 hunk ignoredcan't find file to patch at input line 77Perhaps you used the wrong -p or --strip option?The text leading up to this was:Can we get a rebase? I am marking the patch \"Waiting on Author\"-- Ibrar Ahmed",
"msg_date": "Mon, 8 Mar 2021 21:23:58 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Let people set host(no)ssl settings from initdb"
},
{
"msg_contents": "\nOn 3/8/21 11:23 AM, Ibrar Ahmed wrote:\n>\n>\n> On Thu, Mar 4, 2021 at 7:25 AM Michael Paquier <michael@paquier.xyz\n> <mailto:michael@paquier.xyz>> wrote:\n>\n> On Wed, Mar 03, 2021 at 03:07:30PM +0100, Peter Eisentraut wrote:\n> > I think there is enough sustained opposition to this patch that\n> we can mark\n> > this as rejected in the commitfest.\n>\n> +1.\n> --\n> Michael\n>\n>\n> The patch\n> (v5-0001-Enable-setting-pg_hba.conf-permissions-from-initd.patch) does\n> not apply successfully.\n> There are two reasons first is it was not generated with proper \"-p\"\n> which confused cfbot. Second, after\n> fixing that issue you still need to rebase that.\n>\n>\n> http://cfbot.cputube.org/patch_32_2916.log\n> <http://cfbot.cputube.org/patch_32_2916.log>\n>\n> |+++ doc/src/sgml/ref/initdb.sgml\n> --------------------------\n> No file to patch. Skipping patch.\n> 1 out of 1 hunk ignored\n> can't find file to patch at input line 77\n> Perhaps you used the wrong -p or --strip option?\n> The text leading up to this was:\n>\n>\n> Can we get a rebase?\n>\n> I am marking the patch \"Waiting on Author\"\n\n\n\nWhat is the point of doing that if we're going to reject the patch as\ndiscussed upthread?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 8 Mar 2021 18:13:14 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Let people set host(no)ssl settings from initdb"
},
{
"msg_contents": "On Mon, Mar 08, 2021 at 06:13:14PM -0500, Andrew Dunstan wrote:\n> What is the point of doing that if we're going to reject the patch as\n> discussed upthread?\n\nI have read again this thread, and still understand that this is the\nconsensus that has been reached. The CF entry has been updated to\nreflect that.\n--\nMichael",
"msg_date": "Tue, 9 Mar 2021 09:47:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Let people set host(no)ssl settings from initdb"
}
] |
[
{
"msg_contents": "Here are patches to add support for Unicode normalization into SQL, per \nSQL standard:\n\n normalize($string [, form])\n $string is [form] normalized\n\n(comment about silly SQL syntax here)\n\nWe already have all the infrastructure for Unicode normalization for the \nSASLprep functionality. The first patch extends the internal APIs to \nsupport all four normal forms instead of only NFKC used by SASLprep. \nThe second patch adds the SQL layer on top of it.\n\nThis could be used to preprocess or check strings before using them with \ndeterministic collations or locale implementations that don't deal with \nnon-NFC data correctly, perhaps using triggers, generated columns, or \ndomains. The NFKC and NFKD normalizations could also be used for \ngeneral data cleaning, similar to what SASLprep does.\n\nAs a future idea, I think we could also hook Unicode normalization into \nthe protocol-level encoding conversion.\n\nAlso, there is a way to optimize the \"is normalized\" test for common \ncases, described in UTR #15. For that we'll need an additional data \nfile from Unicode. In order to simplify that, I would like my patch \n\"Add support for automatically updating Unicode derived files\" \nintegrated first.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 12 Dec 2019 12:46:21 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Unicode normalization SQL functions"
},
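The behavior the proposed normalize()/IS NORMALIZED syntax would expose can be previewed with Python's standard unicodedata module, which implements the same four Unicode normal forms. This is only an illustration of the Unicode semantics, not the patch's implementation (unicodedata.is_normalized requires Python 3.8+):

```python
import unicodedata

# U+00E9 is the precomposed "é"; "e" + U+0301 is the decomposed form.
precomposed = "\u00e9"
decomposed = "e\u0301"

# The four normal forms the patch supports, via Python's unicodedata:
assert unicodedata.normalize("NFC", decomposed) == precomposed
assert unicodedata.normalize("NFD", precomposed) == decomposed

# Rough equivalent of "$string IS NFC NORMALIZED" (Python 3.8+):
print(unicodedata.is_normalized("NFC", precomposed))  # True
print(unicodedata.is_normalized("NFC", decomposed))   # False
```

NFKC/NFKD additionally fold compatibility characters (e.g. ligatures and full-width forms), which is what makes them suitable for the general data cleaning mentioned above.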
{
"msg_contents": "\tPeter Eisentraut wrote:\n\n> Also, there is a way to optimize the \"is normalized\" test for common \n> cases, described in UTR #15. For that we'll need an additional data \n> file from Unicode. In order to simplify that, I would like my patch \n> \"Add support for automatically updating Unicode derived files\" \n> integrated first.\n\nWould that explain that the NFC/NFKC normalization and \"is normalized\"\ncheck seem abnormally slow with the current patch, or should\nit be regarded independently of the other patch?\n\nFor instance, testing 10000 short ASCII strings:\n\npostgres=# select count(*) from (select md5(i::text) as t from\ngenerate_series(1,10000) as i) s where t is nfc normalized ;\n count \n-------\n 10000\n(1 row)\n\nTime: 2573,859 ms (00:02,574)\n\nBy comparison, the NFD/NFKD case is faster by two orders of magnitude:\n\npostgres=# select count(*) from (select md5(i::text) as t from\ngenerate_series(1,10000) as i) s where t is nfd normalized ;\n count \n-------\n 10000\n(1 row)\n\nTime: 29,962 ms\n\nAlthough NFC/NFKC has a recomposition step that NFD/NFKD\ndoesn't have, such a difference is surprising.\n\nI've tried an alternative implementation based on ICU's\nunorm2_isNormalized() /unorm2_normalize() functions (which I'm\ncurrently adding to the icu_ext extension to be exposed in SQL).\nWith these, the 4 normal forms are in the 20ms ballpark with the above\ntest case, without a clear difference between composed and decomposed\nforms.\n\nIndependently of the performance, I've compared the results\nof the ICU implementation vs this patch on large series of strings\nwith all normal forms and could not find any difference.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Mon, 06 Jan 2020 17:00:11 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: Unicode normalization SQL functions"
},
{
"msg_contents": "On 2020-01-06 17:00, Daniel Verite wrote:\n> \tPeter Eisentraut wrote:\n> \n>> Also, there is a way to optimize the \"is normalized\" test for common\n>> cases, described in UTR #15. For that we'll need an additional data\n>> file from Unicode. In order to simplify that, I would like my patch\n>> \"Add support for automatically updating Unicode derived files\"\n>> integrated first.\n> \n> Would that explain that the NFC/NFKC normalization and \"is normalized\"\n> check seem abnormally slow with the current patch, or should\n> it be regarded independently of the other patch?\n\nThat's unrelated.\n\n> For instance, testing 10000 short ASCII strings:\n> \n> postgres=# select count(*) from (select md5(i::text) as t from\n> generate_series(1,10000) as i) s where t is nfc normalized ;\n> count\n> -------\n> 10000\n> (1 row)\n> \n> Time: 2573,859 ms (00:02,574)\n> \n> By comparison, the NFD/NFKD case is faster by two orders of magnitude:\n> \n> postgres=# select count(*) from (select md5(i::text) as t from\n> generate_series(1,10000) as i) s where t is nfd normalized ;\n> count\n> -------\n> 10000\n> (1 row)\n> \n> Time: 29,962 ms\n> \n> Although NFC/NFKC has a recomposition step that NFD/NFKD\n> doesn't have, such a difference is surprising.\n\nIt's very likely that this is because the recomposition calls \nrecompose_code() which does a sequential scan of UnicodeDecompMain for \neach character. 
To optimize that, we should probably build a bespoke \nreverse mapping table that can be accessed more efficiently.\n\n> I've tried an alternative implementation based on ICU's\n> unorm2_isNormalized() /unorm2_normalize() functions (which I'm\n> currently adding to the icu_ext extension to be exposed in SQL).\n> With these, the 4 normal forms are in the 20ms ballpark with the above\n> test case, without a clear difference between composed and decomposed\n> forms.\n\nThat's good feedback.\n\n> Independently of the performance, I've compared the results\n> of the ICU implementation vs this patch on large series of strings\n> with all normal forms and could not find any difference.\n\nAnd that too.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 9 Jan 2020 10:20:14 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unicode normalization SQL functions"
},
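The bespoke reverse mapping table suggested above can be illustrated with a dictionary keyed on codepoint pairs. This sketch uses Python's unicodedata as a stand-in for UnicodeDecompMain; it ignores the Unicode composition exclusions, so it is a rough illustration of the lookup-table idea rather than a complete NFC recomposition:

```python
import sys
import unicodedata

# Build the reverse (starter, combiner) -> composed mapping once, so
# recomposition becomes an O(1) dictionary lookup instead of a
# sequential scan of the decomposition table for every character.
recompose = {}
for cp in range(sys.maxunicode + 1):
    decomp = unicodedata.decomposition(chr(cp))
    # Canonical decompositions only: compatibility ones carry a "<tag>".
    if decomp and not decomp.startswith("<"):
        parts = decomp.split()
        if len(parts) == 2:
            first, second = (int(p, 16) for p in parts)
            recompose[(first, second)] = cp

# 'e' (U+0065) + combining acute (U+0301) recomposes to U+00E9.
print(hex(recompose[(0x0065, 0x0301)]))  # 0xe9
```

The one-time table build is linear in the size of the decomposition data, after which each recomposition lookup is constant time, which is the kind of change that would remove the per-character scan described above.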
{
"msg_contents": "Here is an updated patch set that now also implements the \"quick check\" \nalgorithm from UTR #15 for making IS NORMALIZED very fast in many cases, \nwhich I had mentioned earlier in the thread.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 20 Jan 2020 10:17:50 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unicode normalization SQL functions"
},
{
"msg_contents": "\tPeter Eisentraut wrote:\n\n> Here is an updated patch set that now also implements the \"quick check\" \n> algorithm from UTR #15 for making IS NORMALIZED very fast in many cases, \n> which I had mentioned earlier in the thread.\n\nI found a bug in unicode_is_normalized_quickcheck() which is\ntriggered when the last codepoint of the string is beyond\nU+10000. On encountering it, it does:\n+\t\tif (is_supplementary_codepoint(ch))\n+\t\t\tp++;\nWhen ch is the last codepoint, it makes p point to\nthe ending zero, but the subsequent p++ done by\nthe for loop makes it miss the exit and go into over-reading.\n\nBut anyway, what's the reason for skipping the codepoint\nfollowing a codepoint outside of the BMP?\nI've figured that it comes from porting the Java code in UAX#15:\n\npublic int quickCheck(String source) {\n short lastCanonicalClass = 0;\n int result = YES;\n for (int i = 0; i < source.length(); ++i) {\n\tint ch = source.codepointAt(i);\n\tif (Character.isSupplementaryCodePoint(ch)) ++i;\n\tshort canonicalClass = getCanonicalClass(ch);\n\tif (lastCanonicalClass > canonicalClass && canonicalClass != 0) {\n\t return NO;\t }\n\tint check = isAllowed(ch);\n\tif (check == NO) return NO;\n\tif (check == MAYBE) result = MAYBE;\n\tlastCanonicalClass = canonicalClass;\n }\n return result;\n}\n\nsource.length() is the length in UTF-16 code units, in which a surrogate\npair counts for 2. This would be why it does\n if (Character.isSupplementaryCodePoint(ch)) ++i;\nit's meant to skip the 2nd UTF-16 code of the pair.\nAs this does not apply to the 32-bit pg_wchar, I think the two lines above\nin the C implementation should just be removed.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Tue, 28 Jan 2020 10:48:45 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: Unicode normalization SQL functions"
},
{
"msg_contents": "On 2020-01-28 10:48, Daniel Verite wrote:\n> I found a bug in unicode_is_normalized_quickcheck() which is\n> triggered when the last codepoint of the string is beyond\n> U+10000. On encountering it, it does:\n> +\t\tif (is_supplementary_codepoint(ch))\n> +\t\t\tp++;\n> When ch is the last codepoint, it makes p point to\n> the ending zero, but the subsequent p++ done by\n> the for loop makes it miss the exit and go into over-reading.\n> \n> But anyway, what's the reason for skipping the codepoint\n> following a codepoint outside of the BMP?\n\nYou're right, this didn't make any sense. Here is a new patch set with \nthat fixed.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 28 Jan 2020 21:21:18 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unicode normalization SQL functions"
},
{
"msg_contents": "On 1/28/20 9:21 PM, Peter Eisentraut wrote:\n> You're right, this didn't make any sense.ᅵ Here is a new patch set with \n> that fixed.\n\nThanks for this patch. This is a feature which has been on my personal \ntodo list for a while and something which I have wished to have a couple \nof times.\n\nI took a quick look at the patch and here is some feedback:\n\nA possible concern is increased binary size from the new tables for the \nquickcheck but personally I think they are worth it.\n\nA potential optimization would be to merge utf8_to_unicode() and \npg_utf_mblen() into one function in unicode_normalize_func() since \nutf8_to_unicode() already knows length of the character. Probably not \nworth it though.\n\nIt feels a bit wasteful to measure output_size in \nunicode_is_normalized() since unicode_normalize() actually already knows \nthe length of the buffer, it just does not return it.\n\nA potential optimization for the normalized case would be to abort the \nquick check on the first maybe and normalize from that point on only. If \nI can find the time I might try this out and benchmark it.\n\nNitpick: \"split/\\s*;\\s*/, $line\" in generate-unicode_normprops_table.pl \nshould be \"split /\\s*;\\s*/, $line\".\n\nWhat about using else if in the code below for clarity?\n\n+\t\tif (check == UNICODE_NORM_QC_NO)\n+\t\t\treturn UNICODE_NORM_QC_NO;\n+\t\tif (check == UNICODE_NORM_QC_MAYBE)\n+\t\t\tresult = UNICODE_NORM_QC_MAYBE;\n\nRemove extra space in the line below.\n\n+\telse if (quickcheck == UNICODE_NORM_QC_NO )\n\nAndreas\n\n\n",
"msg_date": "Thu, 13 Feb 2020 01:23:41 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Unicode normalization SQL functions"
},
{
"msg_contents": "On Thu, Feb 13, 2020 at 01:23:41AM +0100, Andreas Karlsson wrote:\n> On 1/28/20 9:21 PM, Peter Eisentraut wrote:\n>> You're right, this didn't make any sense. Here is a new patch set with\n>> that fixed.\n> \n> Thanks for this patch. This is a feature which has been on my personal todo\n> list for a while and something which I have wished to have a couple of\n> times.\n\n(The size of the patch set may justify compressing it)\n--\nMichael",
"msg_date": "Thu, 13 Feb 2020 12:40:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Unicode normalization SQL functions"
},
{
"msg_contents": "On 2020-02-13 01:23, Andreas Karlsson wrote:\n> A potential optimization would be to merge utf8_to_unicode() and\n> pg_utf_mblen() into one function in unicode_normalize_func() since\n> utf8_to_unicode() already knows length of the character. Probably not\n> worth it though.\n\nThis would also require untangling the entire encoding API.\n\n> It feels a bit wasteful to measure output_size in\n> unicode_is_normalized() since unicode_normalize() actually already knows\n> the length of the buffer, it just does not return it.\n\nSure, but really most string APIs work like that. They surely know the \nstring length internally, but afterwards you often have to call strlen() \nagain.\n\n> A potential optimization for the normalized case would be to abort the\n> quick check on the first maybe and normalize from that point on only. If\n> I can find the time I might try this out and benchmark it.\n\nAre you sure this would always be valid? The fact that this wasn't \nmentioned in UTR #15 makes me suspicious.\n\n> Nitpick: \"split/\\s*;\\s*/, $line\" in generate-unicode_normprops_table.pl\n> should be \"split /\\s*;\\s*/, $line\".\n\ndone\n\n> What about using else if in the code below for clarity?\n> \n> +\t\tif (check == UNICODE_NORM_QC_NO)\n> +\t\t\treturn UNICODE_NORM_QC_NO;\n> +\t\tif (check == UNICODE_NORM_QC_MAYBE)\n> +\t\t\tresult = UNICODE_NORM_QC_MAYBE;\n\ndone\n\n> Remove extra space in the line below.\n> \n> +\telse if (quickcheck == UNICODE_NORM_QC_NO )\n\nI didn't find this in my local copy.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 17 Feb 2020 16:56:01 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unicode normalization SQL functions"
},
{
"msg_contents": " Hi,\n\nI've checked the v3 patch against the results of the normalization\ndone by ICU [1] on my test data again, and they're identical\n(as they were with v1; v2 had the bug discussed upthread, now fixed).\n\nConcerning execution speed, there's an excessive CPU usage when\nnormalizing into NFC or NFKC. Looking at pre-existing code, it looks\nlike recompose_code() in unicode_norm.c looping over the\nUnicodeDecompMain array might be very costly.\n\nAnother point is that the ICU-based implementation appears\nto be significantly faster in all cases, which makes me wonder\nwhy ICU builds should not just use ICU instead of the PG-core\nimplementation.\nTo illustrate this, here are the execution times reported by psql for\nthe queries below exercising the normalization code, both with the\nfunctions provided by the patch and with the equivalent functions\nimplemented with ICU.\nThe dataset is ~10 million unique short strings\nextracted from real data, and the number is a median execution time in\nmillisecs, for 10 successive runs with query parallelism off\n(stddev in parentheses).\n\n operation |\t core\t |\ticu \n------------+--------------+-----------\n nfc check | 4398 (20) | 3088 (27)\n nfc conv | 771502 (414) | 5503 (19)\n nfd check | 4510 (10) | 2898 (8)\n nfd conv | 9102 (1)\t | 5569 (6)\n nfkc check | 4825 (51) | 3273 (4)\n nfkc conv | 772240 (340) | 5763 (8)\n nfkd check | 4794 (4)\t | 3170 (39)\n nfkd conv | 9229 (4)\t | 5824 (9)\n\nThe queries:\n\ncheck w/core:\n select count(*) from words where w is $NORM normalized;\n\nconversion w/core:\n select sum(length(normalize(w, $NORM))) from words;\n\ncheck w/icu:\n select count(*) from words where icu_is_normalized(w, '$NORM');\n\nconversion w/icu:\n select sum(length(icu_normalize(w, '$NORM'))) from words;\n\n\n[1] https://github.com/dverite/icu_ext/blob/master/icu_normalize.c\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: 
@DanielVerite\n\n\n",
"msg_date": "Mon, 17 Feb 2020 20:08:00 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: Unicode normalization SQL functions"
},
{
"msg_contents": " One nitpick:\n\nAround this hunk:\n\n- * unicode_normalize_kc - Normalize a Unicode string to NFKC form.\n+ * unicode_normalize - Normalize a Unicode string to the specified form.\n *\n * The input is a 0-terminated array of codepoints.\n *\n@@ -304,8 +306,10 @@ decompose_code(pg_wchar code, pg_wchar **result, int\n*current)\n * string is palloc'd instead, and OOM is reported with ereport().\n */\n\n\n\nThe comment in full says:\n\n/*\t\t\t\t\t\t\t\t\t \n * unicode_normalize - Normalize a Unicode string to the specified form. \n *\t\t\t\t\t\t\t\t\t \n * The input is a 0-terminated array of codepoints.\t\t\t \n *\t\t\t\t\t\t\t\t\t \n * In frontend, returns a 0-terminated array of codepoints, allocated with \n * malloc. Or NULL if we run out of memory. In frontend, the returned\t \n * string is palloc'd instead, and OOM is reported with ereport().\t \n */\n\nIt looks like the 2nd occurrence of \"frontend\" was meant to be \"backend\".\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Mon, 17 Feb 2020 20:14:03 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: Unicode normalization SQL functions"
},
{
"msg_contents": "On 2020-02-17 20:14, Daniel Verite wrote:\n> The comment in full says:\n> \n> /*\t\t\t\t\t\t\t\t\t\n> * unicode_normalize - Normalize a Unicode string to the specified form.\n> *\t\t\t\t\t\t\t\t\t\n> * The input is a 0-terminated array of codepoints.\t\t\t\n> *\t\t\t\t\t\t\t\t\t\n> * In frontend, returns a 0-terminated array of codepoints, allocated with\n> * malloc. Or NULL if we run out of memory. In frontend, the returned\t\n> * string is palloc'd instead, and OOM is reported with ereport().\t\n> */\n> \n> It looks like the 2nd occurrence of \"frontend\" was meant to be \"backend\".\n\nThis was a pre-existing problem, so I have fixed that separately.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 19 Feb 2020 21:09:42 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unicode normalization SQL functions"
},
{
"msg_contents": "On 2020-02-17 20:08, Daniel Verite wrote:\n> Concerning execution speed, there's an excessive CPU usage when\n> normalizing into NFC or NFKC. Looking at pre-existing code, it looks\n> like recompose_code() in unicode_norm.c looping over the\n> UnicodeDecompMain array might be very costly.\n\nYes, this is a known issue and I think room for future optimization work.\n\n> Another point is that the ICU-based implementation appears\n> to be significantly faster in all cases, which makes me wonder\n> why ICU builds should not just use ICU instead of the PG-core\n> implementation.\n\nThat would require linking libpq to ICU (for SASLprep), and in general \nwould either make ICU required or require maintaining multiple \nimplementations. I don't think we're there yet.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 19 Feb 2020 21:23:12 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unicode normalization SQL functions"
},
{
"msg_contents": "What is that status of this patch set? I think we have nailed down the \nbehavior, but there were some concerns about certain performance \ncharacteristics. Do people feel that those are required to be addressed \nin this cycle?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 19 Mar 2020 15:41:13 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unicode normalization SQL functions"
},
{
"msg_contents": "On 3/19/20 3:41 PM, Peter Eisentraut wrote:\n> What is that status of this patch set?ᅵ I think we have nailed down the \n> behavior, but there were some concerns about certain performance \n> characteristics.ᅵ Do people feel that those are required to be addressed \n> in this cycle?\n\nPersonally I would rather see it merged if the code is correct (which it \nseems like it is from what I can tell) as the performance seems to be \ngood enough for it to be useful.\n\nUnicode normalization is a feature which I have wished and at least for \nmy use cases the current implementation is more than fast enough.\n\nAndreas\n\n\n",
"msg_date": "Thu, 19 Mar 2020 18:01:12 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Unicode normalization SQL functions"
},
{
"msg_contents": "\tPeter Eisentraut wrote:\n\n> What is that status of this patch set? I think we have nailed down the \n> behavior, but there were some concerns about certain performance \n> characteristics. Do people feel that those are required to be addressed \n> in this cycle?\n\nNot finding any other issue with v3 or objections in the thread,\nI've set the status to Ready For Committer in the CF.\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Mon, 23 Mar 2020 17:26:20 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: Unicode normalization SQL functions"
},
{
"msg_contents": "On 2020-03-23 17:26, Daniel Verite wrote:\n> \tPeter Eisentraut wrote:\n> \n>> What is that status of this patch set? I think we have nailed down the\n>> behavior, but there were some concerns about certain performance\n>> characteristics. Do people feel that those are required to be addressed\n>> in this cycle?\n> \n> Not finding any other issue with v3 or objections in the thread,\n> I've set the status to Ready For Committer in the CF.\n\nI have committed the 0001 patch.\n\nNow I have some concerns about the size of the new table in \nunicode_normprops_table.h, and the resulting binary size. At the very \nleast, we should probably make that #ifndef FRONTEND or something like \nthat so libpq isn't bloated by it unnecessarily. Perhaps there is a \nbetter format for that table? Any ideas?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 24 Mar 2020 10:20:54 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unicode normalization SQL functions"
},
{
"msg_contents": "On 2020-03-24 10:20, Peter Eisentraut wrote:\n> Now I have some concerns about the size of the new table in\n> unicode_normprops_table.h, and the resulting binary size. At the very\n> least, we should probably make that #ifndef FRONTEND or something like\n> that so libpq isn't bloated by it unnecessarily. Perhaps there is a\n> better format for that table? Any ideas?\n\nI have figured this out. New patch is attached.\n\nFirst, I have added #ifndef FRONTEND, as mentioned above, so libpq isn't \nbloated. Second, I have changed the lookup structure to a bitfield, so \neach entry is only 32 bits instead of 64. Third, I have dropped the \nquickcheck tables for the NFD and NFKD forms. Those are by far the \nbiggest tables, and you still get okay performance if you do the \nnormalization check the long way, since we don't need the recomposition \nstep on those cases, which is by far the slowest part. The main use \ncase of all of this, I expect, is to check for NFC normalization, so \nit's okay if the other variants are not optimized to the same extent.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 26 Mar 2020 08:25:46 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unicode normalization SQL functions"
},
{
"msg_contents": "On Thu, Mar 26, 2020 at 3:26 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> I have figured this out. New patch is attached.\n>\n> First, I have added #ifndef FRONTEND, as mentioned above, so libpq isn't\n> bloated. Second, I have changed the lookup structure to a bitfield, so\n> each entry is only 32 bits instead of 64. Third, I have dropped the\n> quickcheck tables for the NFD and NFKD forms. Those are by far the\n> biggest tables, and you still get okay performance if you do the\n> normalization check the long way, since we don't need the recomposition\n> step on those cases, which is by far the slowest part. The main use\n> case of all of this, I expect, is to check for NFC normalization, so\n> it's okay if the other variants are not optimized to the same extent.\n\nReading the link cited in the patch\n\nhttp://www.unicode.org/reports/tr15/#Detecting_Normalization_Forms\n\n\"The data for the implementation of the isAllowed() call can be\naccessed in memory with a hash table or a trie (see Section 14,\nImplementation Notes); the latter will be the fastest.\"\n\nWe don't have a trie implementation in Postgres, but we do have a\nperfect hash implementation. Doing that would bring the tables back to\n64 bits per entry, but would likely be noticeably faster than binary\nsearch. Since v4 has left out the biggest tables entirely, I think\nthis might be worth a look for the smaller tables remaining.\n\nIn the attached v5, when building the hash tables, we sort the code\npoints by NO/MAYBE, and store the index of the beginning of the NO\nblock:\n\nMMMNNNNNNNNN\n~~~^\n\nThat way we can tell a NO from a MAYBE by testing the result of the hash lookup.\n\nRegression tests pass, but I haven't measured performance yet. 
I had\nto fiddle with the hash seeds a bit to get the larger table to build.\n\nAlso, if we go with v4, I noticed the following test is present twice:\n\n+SELECT \"normalize\"('abc', 'def'); -- run-time error\n\n\n--\nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 27 Mar 2020 01:41:43 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unicode normalization SQL functions"
},
{
"msg_contents": "I wrote:\n>\n> Regression tests pass, but I haven't measured performance yet.\n\nUsing a test similar to one upthread:\n\nselect count(*) from (select md5(i::text) as t from\ngenerate_series(1,100000) as i) s where t is nfc normalized ;\n\nI get (median of three)\nv4 419ms\nv5 310ms\n\nwith binary size\nv4 HEAD + 33kB\nv5 HEAD + 57kB\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 27 Mar 2020 15:16:12 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unicode normalization SQL functions"
},
{
"msg_contents": "On 2020-03-26 08:25, Peter Eisentraut wrote:\n> On 2020-03-24 10:20, Peter Eisentraut wrote:\n>> Now I have some concerns about the size of the new table in\n>> unicode_normprops_table.h, and the resulting binary size. At the very\n>> least, we should probably make that #ifndef FRONTEND or something like\n>> that so libpq isn't bloated by it unnecessarily. Perhaps there is a\n>> better format for that table? Any ideas?\n> \n> I have figured this out. New patch is attached.\n> \n> First, I have added #ifndef FRONTEND, as mentioned above, so libpq isn't\n> bloated. Second, I have changed the lookup structure to a bitfield, so\n> each entry is only 32 bits instead of 64. Third, I have dropped the\n> quickcheck tables for the NFD and NFKD forms. Those are by far the\n> biggest tables, and you still get okay performance if you do the\n> normalization check the long way, since we don't need the recomposition\n> step on those cases, which is by far the slowest part. The main use\n> case of all of this, I expect, is to check for NFC normalization, so\n> it's okay if the other variants are not optimized to the same extent.\n\nI have committed this version.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 2 Apr 2020 09:45:09 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unicode normalization SQL functions"
},
{
"msg_contents": "On 2020-03-26 18:41, John Naylor wrote:\n> We don't have a trie implementation in Postgres, but we do have a\n> perfect hash implementation. Doing that would bring the tables back to\n> 64 bits per entry, but would likely be noticeably faster than binary\n> search. Since v4 has left out the biggest tables entirely, I think\n> this might be worth a look for the smaller tables remaining.\n> \n> In the attached v5, when building the hash tables, we sort the code\n> points by NO/MAYBE, and store the index of the beginning of the NO\n> block:\n\nThis is a valuable idea, but I fear it's a bit late now in this cycle. \nI have questions about some details. For example, you mention that you \nhad to fiddle with the hash seed. How does that affect other users of \nPerfectHash? What happens when we update Unicode data and the hash \ndoesn't work anymore? These discussions might derail this patch at this \nhour, so I have committed the previous patch. We can consider your \npatch as a follow-up patch, either now or in the future.\n\n > Also, if we go with v4, I noticed the following test is present twice:\n >\n > +SELECT \"normalize\"('abc', 'def'); -- run-time error\n\nI think this is correct. The other test is for \"is_normalized\".\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 2 Apr 2020 09:51:19 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unicode normalization SQL functions"
},
{
"msg_contents": "On Thu, Apr 2, 2020 at 3:51 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-03-26 18:41, John Naylor wrote:\n> > We don't have a trie implementation in Postgres, but we do have a\n> > perfect hash implementation. Doing that would bring the tables back to\n> > 64 bits per entry, but would likely be noticeably faster than binary\n> > search. Since v4 has left out the biggest tables entirely, I think\n> > this might be worth a look for the smaller tables remaining.\n> >\n> > In the attached v5, when building the hash tables, we sort the code\n> > points by NO/MAYBE, and store the index of the beginning of the NO\n> > block:\n>\n> This is a valuable idea, but I fear it's a bit late now in this cycle.\n> I have questions about some details. For example, you mention that you\n> had to fiddle with the hash seed. How does that affect other users of\n> PerfectHash?\n\nThey would still try the same multipliers they use now, so no effect on them.\n\n> What happens when we update Unicode data and the hash\n> doesn't work anymore?\n\nThe script would choose different multipliers and/or seeds\nautomatically. Only if you're unlucky would you have to fiddle with\nthe hash parameters again. The last-resort multipliers in the v2 patch\nin the other thread [1] seem very effective and easily build both the\nquick check D tables, which I tried for amusement's sake.\n\nThat said, we could reduce the chances of that happening this way:\nAfter trying all the shift-and-add multipliers, we could add another\nloop to try a bunch of numbers in a range. We'd need a quick check to\nweed out multiples of small primes so the number has a decent chance\nof being prime. To save time, just try a few seeds and move on to the\nnext number. 
Maybe emit a warning that it exhausted the shift-and-add\nmultipliers in case the developer wanted to intervene.\n\nIf I resubmit this, I would split the build up into two steps: have\nthe current manual script build the quick check array for later commit\ninto the tree, and build the hash function separately from that as a\nMakefile distprep target. There's no reason to have the hash functions\nchecked in as I did in v5, like we don't check in the keyword hash\nfunctions.\n\nI would also consider splitting it into two patches:\n\n1. Keep binary search but with a more abstract array representation\n(think PG_RMGR). This patch would be large but mostly mechanical.\n2. Switch to hash lookup. A smaller one for ease of review.\n\n[1] https://www.postgresql.org/message-id/CACPNZCvMMj88Bsnk1k%3DRffW6gBw%2BFH7wcwCBfcKLDM%3DUEG2UWg%40mail.gmail.com\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 3 Apr 2020 12:01:47 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unicode normalization SQL functions"
}
] |
[
{
"msg_contents": "Hi,\n\nI find there is a duplicate function call on timestamp2tm in timestamptz_part and timestamp_part.\nIs that necessary? I remove the latter one and it also works.\n\nBest,\nJapin.",
"msg_date": "Thu, 12 Dec 2019 14:34:20 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Duplicate function call on timestamp2tm"
},
{
"msg_contents": "Li Japin <japinli@hotmail.com> writes:\n> I find there is a duplicate function call on timestamp2tm in timestamptz_part and timestamp_part.\n> Is that necessary? I remove the latter one and it also works.\n\nHuh. I do believe you're right. Must be an ancient copy-and-paste\nmistake?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 12 Dec 2019 09:41:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Duplicate function call on timestamp2tm"
},
{
"msg_contents": "I wrote:\n> Li Japin <japinli@hotmail.com> writes:\n>> I find there is a duplicate function call on timestamp2tm in timestamptz_part and timestamp_part.\n>> Is that necessary? I remove the latter one and it also works.\n\n> Huh. I do believe you're right. Must be an ancient copy-and-paste\n> mistake?\n\nAh, after looking in the git history, not quite that ancient:\nthis duplication dates to commit 258ee1b63, which moved these\nswitch cases from the \"if (type == RESERV)\" switches in the\nsame functions. In the previous coding these function calls\nwere actually necessary, but here they're redundant. I guess\nthat's just additional ammunition for Greg's point that the\nkeywords were misclassified ;-).\n\nI see from the code coverage report that we're missing coverage\nfor these and some other paths in timestamp[tz]_part. Think\nI'll go add some more test cases while I'm at it.\n\nThanks again for the report!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 12 Dec 2019 10:13:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Duplicate function call on timestamp2tm"
},
{
"msg_contents": "Thanks for your confirm. Is there anything I can do?\n\nOn Dec 12, 2019, at 11:13 PM, Tom Lane <tgl@sss.pgh.pa.us<mailto:tgl@sss.pgh.pa.us>> wrote:\n\nAh, after looking in the git history, not quite that ancient:\nthis duplication dates to commit 258ee1b63, which moved these\nswitch cases from the \"if (type == RESERV)\" switches in the\nsame functions. In the previous coding these function calls\nwere actually necessary, but here they're redundant. I guess\nthat's just additional ammunition for Greg's point that the\nkeywords were misclassified ;-).\n\n\n\n\n\n\n\nThanks for your confirm. Is there anything I can do?\n\n\nOn Dec 12, 2019, at 11:13 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\nAh,\n after looking in the git history, not quite that ancient:\nthis\n duplication dates to commit 258ee1b63, which moved these\nswitch\n cases from the \"if (type == RESERV)\" switches in the\nsame\n functions. In the previous coding these function calls\nwere\n actually necessary, but here they're redundant. I guess\nthat's\n just additional ammunition for Greg's point that the\nkeywords\n were misclassified ;-).",
"msg_date": "Thu, 12 Dec 2019 16:04:53 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Duplicate function call on timestamp2tm"
},
{
"msg_contents": "Li Japin <japinli@hotmail.com> writes:\n> Thanks for your confirm. Is there anything I can do?\n\nNo, I've got it.\n\nIn adding the test coverage I spoke of, I thought we should allow\nthe date_part tests to check all the entries in timestamp[tz]_tbl\nnot just those around current time, and I found an independent\nproblem:\n\n timestamp | isoyear | week | isodow | dow | doy \n-----------------------------+-----------+------+--------+-----+-----\n...\n Tue Feb 16 17:32:01 0097 BC | -96 | 7 | 2 | 2 | 47\n Sat Feb 16 17:32:01 0097 | 97 | 7 | 6 | 6 | 47\n\nthat is, the ISOYEAR case is failing to correct for BC years.\n\nWe could imagine fixing this in date2isoyear() but I think it's\nsafer to leave that function alone and do the corrections\nin timestamp[tz]_part. Note for example that formatting.c\nalready applies a BC correction to the result; and I think the\nusage in date2isoyearday() requires sticking to the year-zero-exists\nconvention, too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 12 Dec 2019 11:24:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Duplicate function call on timestamp2tm"
}
] |
[
{
"msg_contents": "I have a nagios check on ancient tempfiles, intended to catch debris left by\ncrashed processes. But triggered on this file:\n\n$ sudo find /var/lib/pgsql/12/data/base/pgsql_tmp -ls\n142977 4 drwxr-x--- 3 postgres postgres 4096 Dec 12 11:32 /var/lib/pgsql/12/data/base/pgsql_tmp\n169868 4 drwxr-x--- 2 postgres postgres 4096 Dec 7 01:35 /var/lib/pgsql/12/data/base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset\n169347 5492 -rw-r----- 1 postgres postgres 5619712 Dec 7 01:35 /var/lib/pgsql/12/data/base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset/0.0\n169346 5380 -rw-r----- 1 postgres postgres 5505024 Dec 7 01:35 /var/lib/pgsql/12/data/base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset/1.0\n\nI found:\n 2019-12-07 01:35:56 | 11025 | postgres | canceling statement due to statement timeout | CLUSTER pg_stat_database_snap USI\n 2019-12-07 01:35:56 | 11025 | postgres | temporary file: path \"base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset/2.0\", size 5455872 | CLUSTER pg_stat_database_snap USI\n\nI don't have a saved log entry for sharedfileset/0.0 - that may be significant,\nor may be a bug in my log-archiving script.\n\nThe process has not crashed since it started:\n\npostgres 18145 1 0 Nov18 ? 00:51:39 /usr/pgsql-12/bin/postmaster -D /var/lib/pgsql/12/data\npostgres 18147 18145 0 Nov18 ? 00:00:53 postgres: logger \npostgres 18149 18145 0 Nov18 ? 
00:38:42 postgres: checkpointer \n\nversion | PostgreSQL 12.1 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bit\n\nJust to be sure:\n$ ps -wwf 11025 || echo not running\nUID PID PPID C STIME TTY STAT TIME CMD\nnot running\n\nI wasn't able to reproduce it like this.\nPGOPTIONS='-c maintenance_work_mem=128MB -c client_min_messages=debug' psql postgres -c 'CREATE TABLE t (i int unique); INSERT INTO t SELECT generate_series(1,999999)' -c 'SET statement_timeout=4999' -c 'CLUSTER t USING t_i_key'\n\nActually, I tried using pg_ls_tmpdir(), but it unconditionally masks\nnon-regular files and thus shared filesets. Maybe that's worth discussion on a\nnew thread ?\n\nsrc/backend/utils/adt/genfile.c\n /* Ignore anything but regular files */\n if (!S_ISREG(attrib.st_mode))\n continue;\n\nBTW there's no other tablespaces.\n\nJustin\n\n\n",
"msg_date": "Thu, 12 Dec 2019 12:05:06 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "shared tempfile was not removed on statement_timeout (unreproducible)"
},
{
"msg_contents": "On Fri, Dec 13, 2019 at 7:05 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I have a nagios check on ancient tempfiles, intended to catch debris left by\n> crashed processes. But triggered on this file:\n>\n> $ sudo find /var/lib/pgsql/12/data/base/pgsql_tmp -ls\n> 142977 4 drwxr-x--- 3 postgres postgres 4096 Dec 12 11:32 /var/lib/pgsql/12/data/base/pgsql_tmp\n> 169868 4 drwxr-x--- 2 postgres postgres 4096 Dec 7 01:35 /var/lib/pgsql/12/data/base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset\n> 169347 5492 -rw-r----- 1 postgres postgres 5619712 Dec 7 01:35 /var/lib/pgsql/12/data/base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset/0.0\n> 169346 5380 -rw-r----- 1 postgres postgres 5505024 Dec 7 01:35 /var/lib/pgsql/12/data/base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset/1.0\n>\n> I found:\n> 2019-12-07 01:35:56 | 11025 | postgres | canceling statement due to statement timeout | CLUSTER pg_stat_database_snap USI\n> 2019-12-07 01:35:56 | 11025 | postgres | temporary file: path \"base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset/2.0\", size 5455872 | CLUSTER pg_stat_database_snap USI\n\nHmm. I played around with this and couldn't reproduce it, but I\nthought of something. 
What if the statement timeout is reached while\nwe're in here:\n\n#0 PathNameDeleteTemporaryDir (dirname=0x7fffffffd010\n\"base/pgsql_tmp/pgsql_tmp28884.31.sharedfileset\") at fd.c:1471\n#1 0x0000000000a32c77 in SharedFileSetDeleteAll (fileset=0x80182e2cc)\nat sharedfileset.c:177\n#2 0x0000000000a327e1 in SharedFileSetOnDetach (segment=0x80a6e62d8,\ndatum=34385093324) at sharedfileset.c:206\n#3 0x0000000000a365ca in dsm_detach (seg=0x80a6e62d8) at dsm.c:684\n#4 0x000000000061621b in DestroyParallelContext (pcxt=0x80a708f20) at\nparallel.c:904\n#5 0x00000000005d97b3 in _bt_end_parallel (btleader=0x80fe9b4b0) at\nnbtsort.c:1473\n#6 0x00000000005d92f0 in btbuild (heap=0x80a7bc4c8,\nindex=0x80a850a50, indexInfo=0x80fec1ab0) at nbtsort.c:340\n#7 0x000000000067445b in index_build (heapRelation=0x80a7bc4c8,\nindexRelation=0x80a850a50, indexInfo=0x80fec1ab0, isreindex=true,\nparallel=true) at index.c:2963\n#8 0x0000000000677bd3 in reindex_index (indexId=16532,\nskip_constraint_checks=true, persistence=112 'p', options=0) at\nindex.c:3591\n#9 0x0000000000678402 in reindex_relation (relid=16508, flags=18,\noptions=0) at index.c:3807\n#10 0x000000000073928f in finish_heap_swap (OIDOldHeap=16508,\nOIDNewHeap=16573, is_system_catalog=false,\nswap_toast_by_content=false, check_constraints=false,\nis_internal=true, frozenXid=604, cutoffMulti=1, newrelpersistence=112\n'p') at cluster.c:1409\n#11 0x00000000007389ab in rebuild_relation (OldHeap=0x80a7bc4c8,\nindexOid=16532, verbose=false) at cluster.c:622\n#12 0x000000000073849e in cluster_rel (tableOid=16508, indexOid=16532,\noptions=0) at cluster.c:428\n#13 0x0000000000737f22 in cluster (stmt=0x800cfcbf0, isTopLevel=true)\nat cluster.c:185\n#14 0x0000000000a7cc5c in standard_ProcessUtility (pstmt=0x800cfcf40,\nqueryString=0x800cfc120 \"cluster t USING t_i_idx ;\",\ncontext=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\ndest=0x800cfd030, completionTag=0x7fffffffe0b0 \"\") at utility.c:654\n\nThe CHECK_FOR_INTERRUPTS() inside 
the walkdir() loop could ereport()\nout of there after deleting some but not all of your files, but the\ncode in dsm_detach() has already popped the callback (which it does\n\"to avoid infinite error recursion\"), so it won't run again on error\ncleanup. Hmm. But then... maybe the two log lines you quoted should\nbe the other way around for that.\n\n> Actually, I tried using pg_ls_tmpdir(), but it unconditionally masks\n> non-regular files and thus shared filesets. Maybe that's worth discussion on a\n> new thread ?\n>\n> src/backend/utils/adt/genfile.c\n> /* Ignore anything but regular files */\n> if (!S_ISREG(attrib.st_mode))\n> continue;\n\n+1, that's worth fixing.\n\n\n",
"msg_date": "Fri, 13 Dec 2019 15:03:47 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shared tempfile was not removed on statement_timeout\n (unreproducible)"
},
{
"msg_contents": "On Fri, Dec 13, 2019 at 03:03:47PM +1300, Thomas Munro wrote:\n> On Fri, Dec 13, 2019 at 7:05 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > I have a nagios check on ancient tempfiles, intended to catch debris left by\n> > crashed processes. But triggered on this file:\n> >\n> > $ sudo find /var/lib/pgsql/12/data/base/pgsql_tmp -ls\n> > 142977 4 drwxr-x--- 3 postgres postgres 4096 Dec 12 11:32 /var/lib/pgsql/12/data/base/pgsql_tmp\n> > 169868 4 drwxr-x--- 2 postgres postgres 4096 Dec 7 01:35 /var/lib/pgsql/12/data/base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset\n> > 169347 5492 -rw-r----- 1 postgres postgres 5619712 Dec 7 01:35 /var/lib/pgsql/12/data/base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset/0.0\n> > 169346 5380 -rw-r----- 1 postgres postgres 5505024 Dec 7 01:35 /var/lib/pgsql/12/data/base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset/1.0\n> >\n> > I found:\n> > 2019-12-07 01:35:56 | 11025 | postgres | canceling statement due to statement timeout | CLUSTER pg_stat_database_snap USI\n> > 2019-12-07 01:35:56 | 11025 | postgres | temporary file: path \"base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset/2.0\", size 5455872 | CLUSTER pg_stat_database_snap USI\n> \n> Hmm. But then... maybe the two log lines you quoted should\n> be the other way around for that.\n\nAnd, it's actually the other way around, when I order BY something better than\nleft(log_time::text,19).\n\npostgres=# SELECT log_time pid, session_line ln, pid, database db, left(message,99), left(query,33) FROM jrn_postgres_log WHERE user_name IS NOT NULL AND log_time BETWEEN '2019-12-07 01:35' AND '2019-12-07 01:36' ORDER BY 1,2;\n\n 2019-12-07 01:35:56.626-06 | 1 | 11025 | postgres | temporary file: path \"base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset/2.0\", size 5455872 | CLUSTER pg_stat_database_snap USI\n 2019-12-07 01:35:56.626-06 | 2 | 11025 | postgres | canceling statement due to statement timeout | CLUSTER pg_stat_database_snap USI\n\n\n",
"msg_date": "Thu, 12 Dec 2019 20:13:22 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: shared tempfile was not removed on statement_timeout\n (unreproducible)"
},
{
"msg_contents": "On Fri, Dec 13, 2019 at 3:13 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Fri, Dec 13, 2019 at 03:03:47PM +1300, Thomas Munro wrote:\n> > On Fri, Dec 13, 2019 at 7:05 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > 2019-12-07 01:35:56 | 11025 | postgres | canceling statement due to statement timeout | CLUSTER pg_stat_database_snap USI\n> > > 2019-12-07 01:35:56 | 11025 | postgres | temporary file: path \"base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset/2.0\", size 5455872 | CLUSTER pg_stat_database_snap USI\n> >\n> > Hmm. But then... maybe the two log lines you quoted should\n> > be the other way around for that.\n>\n> And, it's actually the other way around, when I order BY something better than\n> left(log_time::text,19).\n\nHah.\n\nOk, so it looks like we shouldn't be relying on the same code path for\n'happy' and 'error' cleanup. This could probably be fixed with a well\nplaced explicit call to SharedFileSetDeleteAll() or a new function\nSharedFileSetDestroy(), and perhaps a flag in shmem to say it's been\ndone so the callback doesn't do it again needlessly. I don't think\nthis problem is specific to parallel index creation.\n\n\n",
"msg_date": "Fri, 13 Dec 2019 15:49:26 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shared tempfile was not removed on statement_timeout\n (unreproducible)"
},
{
"msg_contents": "On Fri, Dec 13, 2019 at 03:03:47PM +1300, Thomas Munro wrote:\n> On Fri, Dec 13, 2019 at 7:05 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > I have a nagios check on ancient tempfiles, intended to catch debris left by\n> > crashed processes. But triggered on this file:\n> >\n> > $ sudo find /var/lib/pgsql/12/data/base/pgsql_tmp -ls\n> > 142977 4 drwxr-x--- 3 postgres postgres 4096 Dec 12 11:32 /var/lib/pgsql/12/data/base/pgsql_tmp\n> > 169868 4 drwxr-x--- 2 postgres postgres 4096 Dec 7 01:35 /var/lib/pgsql/12/data/base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset\n> > 169347 5492 -rw-r----- 1 postgres postgres 5619712 Dec 7 01:35 /var/lib/pgsql/12/data/base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset/0.0\n> > 169346 5380 -rw-r----- 1 postgres postgres 5505024 Dec 7 01:35 /var/lib/pgsql/12/data/base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset/1.0\n> >\n> > I found:\n> > 2019-12-07 01:35:56 | 11025 | postgres | canceling statement due to statement timeout | CLUSTER pg_stat_database_snap USI\n> > 2019-12-07 01:35:56 | 11025 | postgres | temporary file: path \"base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset/2.0\", size 5455872 | CLUSTER pg_stat_database_snap USI\n> \n> Hmm. I played around with this and couldn't reproduce it, but I\n> thought of something. 
What if the statement timeout is reached while\n> we're in here:\n> \n> #0 PathNameDeleteTemporaryDir (dirname=0x7fffffffd010 \"base/pgsql_tmp/pgsql_tmp28884.31.sharedfileset\") at fd.c:1471\n> #1 0x0000000000a32c77 in SharedFileSetDeleteAll (fileset=0x80182e2cc) at sharedfileset.c:177\n> #2 0x0000000000a327e1 in SharedFileSetOnDetach (segment=0x80a6e62d8, datum=34385093324) at sharedfileset.c:206\n> #3 0x0000000000a365ca in dsm_detach (seg=0x80a6e62d8) at dsm.c:684\n> #4 0x000000000061621b in DestroyParallelContext (pcxt=0x80a708f20) at parallel.c:904\n> #5 0x00000000005d97b3 in _bt_end_parallel (btleader=0x80fe9b4b0) at nbtsort.c:1473\n> #6 0x00000000005d92f0 in btbuild (heap=0x80a7bc4c8, index=0x80a850a50, indexInfo=0x80fec1ab0) at nbtsort.c:340\n...\n\n> The CHECK_FOR_INTERRUPTS() inside the walkdir() loop could ereport()\n> out of there after deleting some but not all of your files, but the\n> code in dsm_detach() has already popped the callback (which it does\n> \"to avoid infinite error recursion\"), so it won't run again on error\n> cleanup. Hmm. But then... maybe the two log lines you quoted should\n> be the other way around for that.\n\nWith inspired from re-reading your messages several times in rapid succession,\nI was able to reproduce this easily with:\n\n--- a/src/backend/storage/file/fd.c\n+++ b/src/backend/storage/file/fd.c\n@@ -3344,6 +3344,7 @@ walkdir(const char *path,\n struct stat fst;\n int sret;\n \n+ usleep(99999);\n CHECK_FOR_INTERRUPTS();\n \nOn Fri, Dec 13, 2019 at 03:49:26PM +1300, Thomas Munro wrote:\n> Ok, so it looks like we shouldn't be relying on the same code path for\n> 'happy' and 'error' cleanup. This could probably be fixed with a well\n> placed explicit call to SharedFileSetDeleteAll() or a new function\n> SharedFileSetDestroy(), and perhaps a flag in shmem to say it's been\n> done so the callback doesn't do it again needlessly. 
I don't think\n> this problem is specific to parallel index creation.\n\nFind below a caveman-approved patch which avoids leaving behind tmpfiles.\n\nI'm not sure how to do this cleanly, since:\n| src/include/utils/tuplesort.h: * Tuplesortstate and Sharedsort are opaque types whose details are not\n\nMaybe we need to add a new parameter like:\n| tuplesort_end(Tuplesortstate *state, bool do_delete_fileset)\n\nArguably, that has the benefit that existing callers *have* to confront whether\nthey should delete the fileset or not. This is such a minor issue that it's\nunfortunate to force a confrontation over it.\n\n-- \nJustin\n\ndiff --git a/src/backend/access/nbtree/nbtsort.c b/src/backend/access/nbtree/nbtsort.c\nindex efee86784b..f5b0a48d64 100644\n--- a/src/backend/access/nbtree/nbtsort.c\n+++ b/src/backend/access/nbtree/nbtsort.c\n@@ -511,12 +511,17 @@ _bt_spools_heapscan(Relation heap, Relation index, BTBuildState *buildstate,\n \treturn reltuples;\n }\n \n+extern void *tuplesort_shared_fileset(Tuplesortstate*);\n+\n /*\n * clean up a spool structure and its substructures.\n */\n static void\n _bt_spooldestroy(BTSpool *btspool)\n {\n+\tvoid *fileset = tuplesort_shared_fileset(btspool->sortstate);\n+\tif (fileset)\n+\t\tSharedFileSetDeleteAll(fileset);\n \ttuplesort_end(btspool->sortstate);\n \tpfree(btspool);\n }\n@@ -1669,6 +1674,10 @@ _bt_end_parallel(BTLeader *btleader)\n \t/* Free last reference to MVCC snapshot, if one was used */\n \tif (IsMVCCSnapshot(btleader->snapshot))\n \t\tUnregisterSnapshot(btleader->snapshot);\n+\n+\t// SharedFileSetDeleteAll(btleader->sharedsort->fileset);\n+\t// SharedFileSetDeleteAll(btleader->sharedsort2->fileset);\n+\n \tDestroyParallelContext(btleader->pcxt);\n \tExitParallelMode();\n }\ndiff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c\nindex 5f6420efb2..f89d42f475 100644\n--- a/src/backend/storage/file/fd.c\n+++ b/src/backend/storage/file/fd.c\n@@ -3344,6 +3344,7 @@ walkdir(const char *path,\n 
\t\tstruct stat fst;\n \t\tint\t\t\tsret;\n \n+\t\tusleep(99999);\n \t\tCHECK_FOR_INTERRUPTS();\n \n \t\tif (strcmp(de->d_name, \".\") == 0 ||\ndiff --git a/src/backend/utils/sort/tuplesort.c b/src/backend/utils/sort/tuplesort.c\nindex 3c49476483..3de9592b78 100644\n--- a/src/backend/utils/sort/tuplesort.c\n+++ b/src/backend/utils/sort/tuplesort.c\n@@ -1375,6 +1375,11 @@ tuplesort_free(Tuplesortstate *state)\n \tMemoryContextReset(state->sortcontext);\n }\n \n+void *tuplesort_shared_fileset(Tuplesortstate *state)\n+{\n+\treturn state->shared ? &state->shared->fileset : NULL;\n+}\n+\n /*\n * tuplesort_end\n *\n\n\n",
"msg_date": "Mon, 20 Jul 2020 23:32:58 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: shared tempfile was not removed on statement_timeout"
},
{
"msg_contents": "On Tue, Jul 21, 2020 at 4:33 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> /*\n> * clean up a spool structure and its substructures.\n> */\n> static void\n> _bt_spooldestroy(BTSpool *btspool)\n> {\n> + void *fileset = tuplesort_shared_fileset(btspool->sortstate);\n> + if (fileset)\n> + SharedFileSetDeleteAll(fileset);\n> tuplesort_end(btspool->sortstate);\n> pfree(btspool);\n> }\n\nWhy can't tuplesort_end do it?\n\n\n",
"msg_date": "Mon, 27 Jul 2020 20:00:46 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shared tempfile was not removed on statement_timeout"
},
{
"msg_contents": "On Mon, Jul 27, 2020 at 08:00:46PM +1200, Thomas Munro wrote:\n> On Tue, Jul 21, 2020 at 4:33 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > /*\n> > * clean up a spool structure and its substructures.\n> > */\n> > static void\n> > _bt_spooldestroy(BTSpool *btspool)\n> > {\n> > + void *fileset = tuplesort_shared_fileset(btspool->sortstate);\n> > + if (fileset)\n> > + SharedFileSetDeleteAll(fileset);\n> > tuplesort_end(btspool->sortstate);\n> > pfree(btspool);\n> > }\n> \n> Why can't tuplesort_end do it?\n\nBecause then I think the parallel workers remove their own files, with tests\nfailing like:\n\n+ERROR: could not open temporary file \"0.0\" from BufFile \"0\": No such file or directory\n\nI look around a bit more and came up with this, which works, but I don't know\nenough to say if it's right.\n\ndiff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c\nindex 5f6420efb2..f89d42f475 100644\n--- a/src/backend/storage/file/fd.c\n+++ b/src/backend/storage/file/fd.c\n@@ -3344,6 +3344,7 @@ walkdir(const char *path,\n \t\tstruct stat fst;\n \t\tint\t\t\tsret;\n \n+\t\tusleep(99999);\n \t\tCHECK_FOR_INTERRUPTS();\n \n \t\tif (strcmp(de->d_name, \".\") == 0 ||\ndiff --git a/src/backend/utils/sort/tuplesort.c b/src/backend/utils/sort/tuplesort.c\nindex 3c49476483..c6e5e6d00b 100644\n--- a/src/backend/utils/sort/tuplesort.c\n+++ b/src/backend/utils/sort/tuplesort.c\n@@ -1387,6 +1387,9 @@ tuplesort_free(Tuplesortstate *state)\n void\n tuplesort_end(Tuplesortstate *state)\n {\n+\tif (state->shared && state->shared->workersFinished == state->nParticipants)\n+\t\tSharedFileSetDeleteAll(&state->shared->fileset);\n+\n \ttuplesort_free(state);\n \n \t/*\n\n\n",
"msg_date": "Mon, 27 Jul 2020 05:39:02 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: shared tempfile was not removed on statement_timeout"
},
{
"msg_contents": "On Mon, Jul 27, 2020 at 05:39:02AM -0500, Justin Pryzby wrote:\n> On Mon, Jul 27, 2020 at 08:00:46PM +1200, Thomas Munro wrote:\n> > Why can't tuplesort_end do it?\n> \n> Because then I think the parallel workers remove their own files, with tests\n> failing like:\n> \n> +ERROR: could not open temporary file \"0.0\" from BufFile \"0\": No such file or directory\n> \n> I look around a bit more and came up with this, which works, but I don't know\n> enough to say if it's right.\n\nI convinced myself this is right, since state->nParticipants==-1 for workers.\nOnly the leader should do the cleanup.\n\nAdded here:\nhttps://commitfest.postgresql.org/29/2657/\n\n-- \nJustin",
"msg_date": "Tue, 28 Jul 2020 20:37:24 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: shared tempfile was not removed on statement_timeout"
},
{
"msg_contents": "On Wed, 29 Jul 2020 at 10:37, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Jul 27, 2020 at 05:39:02AM -0500, Justin Pryzby wrote:\n> > On Mon, Jul 27, 2020 at 08:00:46PM +1200, Thomas Munro wrote:\n> > > Why can't tuplesort_end do it?\n> >\n> > Because then I think the parallel workers remove their own files, with tests\n> > failing like:\n> >\n> > +ERROR: could not open temporary file \"0.0\" from BufFile \"0\": No such file or directory\n> >\n> > I look around a bit more and came up with this, which works, but I don't know\n> > enough to say if it's right.\n>\n> I convinced myself this is right, since state->nParticipants==-1 for workers.\n> Only the leader should do the cleanup.\n>\n> Added here:\n> https://commitfest.postgresql.org/29/2657/\n\nI've also investigated this issue. As Thomas mentioned before, this\nproblem is not specific to parallel index creation. Shared temporary\nfiles could be left if the process is interrupted while deleting the\nfile as a part of the work of detaching dsm segment.\n\nTo fix this issue, possible solutions would be:\n\n1. Like the current patch, we call SharedFileSetDeleteAll() before\nDestroyParallelContext() which calls dsm_detach() so that we can make\nsure to delete these files while not relying on on_dsm_detach\ncallback. That way, even if the process is interrupted during that\ncleaning, it will clean these files again during transaction abort\n(AtEOXact_Parallel() calls dsm_detach()). OTOH a downside would be\nthat we will end up setting a rule that we need to explicitly call\nSharedFileSetDeleteAll().\n\n2. We don't use on_dsm_detach callback to delete the shared file set.\nInstead, I wonder if we can delete them at the end of the transaction\nby using ResourceOwner mechanism, like we do for non-shared temporary\nfiles cleanup. This idea doesn't have the cons that idea #1 has. 
OTOH,\nthe lifetime of the shared file set will change from the parallel\ncontext to the transaction, leading to keeping many temporary files until\nthe transaction end. Also, we would need to rework the handling of the shared\nfile set.\n\n3. We use on_dsm_detach as well as on_proc_exit callback to delete the\nshared file set. It doesn't resolve the root cause but that way, even\nif the process didn’t delete it on destroying the parallel context, we\ncan make sure to delete it on process exit.\n\nI think #1 is suitable for back branches. For HEAD, I think #2 and #3\nwould be better in terms of not setting an implicit rule. Thoughts?\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 28 Oct 2020 19:03:14 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: shared tempfile was not removed on statement_timeout"
},
{
"msg_contents": "At Wed, 28 Oct 2020 19:03:14 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in \r\n> On Wed, 29 Jul 2020 at 10:37, Justin Pryzby <pryzby@telsasoft.com> wrote:\r\n> >\r\n> > On Mon, Jul 27, 2020 at 05:39:02AM -0500, Justin Pryzby wrote:\r\n> > > On Mon, Jul 27, 2020 at 08:00:46PM +1200, Thomas Munro wrote:\r\n> > > > Why can't tuplesort_end do it?\r\n> > >\r\n> > > Because then I think the parallel workers remove their own files, with tests\r\n> > > failing like:\r\n> > >\r\n> > > +ERROR: could not open temporary file \"0.0\" from BufFile \"0\": No such file or directory\r\n> > >\r\n> > > I look around a bit more and came up with this, which works, but I don't know\r\n> > > enough to say if it's right.\r\n> >\r\n> > I convinced myself this is right, since state->nParticipants==-1 for workers.\r\n> > Only the leader should do the cleanup.\r\n> >\r\n> > Added here:\r\n> > https://commitfest.postgresql.org/29/2657/\r\n\r\n+\tif (state->shared && state->shared->workersFinished == state->nParticipants)\r\n\r\nIsn't it more straight forward to check \"state->shared &&\r\nstate->worker == -1\"?\r\n\r\n> I've also investigated this issue. As Thomas mentioned before, this\r\n> problem is not specific to parallel index creation. Shared temporary\r\n> files could be left if the process is interrupted while deleting the\r\n> file as a part of the work of detaching dsm segment.\r\n> \r\n> To fix this issue, possible solutions would be:\r\n> \r\n> 1. Like the current patch, we call SharedFileSetDeleteAll() before\r\n> DestroyParallelContext() which calls dsm_detach() so that we can make\r\n> sure to delete these files while not relying on on_dsm_detach\r\n> callback. That way, even if the process is interrupted during that\r\n> cleaning, it will clean these files again during transaction abort\r\n> (AtEOXact_Parallel() calls dsm_detach()). 
OTOH a downside would be\r\n> that we will end up setting a rule that we need to explicitly call\r\n> SharedFileSetDeleteAll().\r\n\r\nThat seems to be common. We release lock explicitly but it is\r\nautomatically released on error. Of couse it is slightly different\r\nthat SharedFileSetOnDetach unconditionally removes the directory but\r\nthat doesn't matter as far as that behavior doesn't lead to an\r\nerror. We can skip, as Thomas suggested, the cleanup if not necessary.\r\n\r\nLooking the comment of SharedFileSetOnDetach:\r\n\r\n| * everything in them. We can't raise an error on failures, because this runs\r\n| * in error cleanup paths.\r\n\r\nI feel that a function that shouldn't error-out also shouldn't be\r\ncancellable. If that's the case, we omit the CHECK_FOR_INTERRUPTS() in\r\nwalkdir() when elevel is smaller than ERROR.\r\n\r\n=====\r\ndiff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c\r\nindex b58502837a..593c23553e 100644\r\n--- a/src/backend/storage/file/fd.c\r\n+++ b/src/backend/storage/file/fd.c\r\n@@ -3374,7 +3374,9 @@ walkdir(const char *path,\r\n \t{\r\n \t\tchar\t\tsubpath[MAXPGPATH * 2];\r\n \r\n-\t\tCHECK_FOR_INTERRUPTS();\r\n+\t\t/* omit interrupts while we shouldn't error-out */\r\n+\t\tif (elevel >= ERROR)\r\n+\t\t\tCHECK_FOR_INTERRUPTS();\r\n \r\n \t\tif (strcmp(de->d_name, \".\") == 0 ||\r\n \t\t\tstrcmp(de->d_name, \"..\") == 0)\r\n=====\r\n\r\n> 2. We don't use on_dsm_detach callback to delete the shared file set.\r\n> Instead, I wonder if we can delete them at the end of the transaction\r\n> by using ResourceOwner mechanism, like we do for non-shared temporary\r\n> files cleanup. This idea doesn't have the cons that idea #1 has. OTOH,\r\n> the lifetime of the shared file set will change from the parallel\r\n> context to the transaction, leading to keep many temporary files until\r\n> the transaction end. Also, we would need to rework the handling shared\r\n> file set.\r\n> \r\n> 3. 
We use on_dsm_detach as well as on_proc_exit callback to delete the\r\n> shared file set. It doesn't resolve the root cause but that way, even\r\n> if the process didn’t delete it on destroying the parallel context, we\r\n> can make sure to delete it on process exit.\r\n> \r\n> I think #1 is suitable for back branches. For HEAD, I think #2 and #3\r\n> would be better in terms of not setting an implicit rule. Thoughts?\r\n\r\nAs long as we allow dsm_detach to be canceled, the problem persists\r\nwhatever else we do. So #2 and #3 seem a bit too much. It seems to me\r\nthat #1 + omitting CHECK_FOR_INTERRUPTS() is suitable for all\r\nbranches.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Tue, 26 Jan 2021 13:46:18 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shared tempfile was not removed on statement_timeout"
},
{
"msg_contents": "On 26/01/2021 06:46, Kyotaro Horiguchi wrote:\n> Looking the comment of SharedFileSetOnDetach:\n> \n> | * everything in them. We can't raise an error on failures, because this runs\n> | * in error cleanup paths.\n> \n> I feel that a function that shouldn't error-out also shouldn't be\n> cancellable. If that's the case, we omit the CHECK_FOR_INTERRUPTS() in\n> walkdir() when elevel is smaller than ERROR.\n> \n> =====\n> diff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c\n> index b58502837a..593c23553e 100644\n> --- a/src/backend/storage/file/fd.c\n> +++ b/src/backend/storage/file/fd.c\n> @@ -3374,7 +3374,9 @@ walkdir(const char *path,\n> \t{\n> \t\tchar\t\tsubpath[MAXPGPATH * 2];\n> \n> -\t\tCHECK_FOR_INTERRUPTS();\n> +\t\t/* omit interrupts while we shouldn't error-out */\n> +\t\tif (elevel >= ERROR)\n> +\t\t\tCHECK_FOR_INTERRUPTS();\n> \n> \t\tif (strcmp(de->d_name, \".\") == 0 ||\n> \t\t\tstrcmp(de->d_name, \"..\") == 0)\n> =====\n\nDon't we potentially have the same problem with all on_dsm_detach \ncallbacks? Looking at the other on_dsm_detach callbacks, I don't see any \nCHECK_FOR_INTERRUPT() calls in them, but it seems fragile to assume that.\n\nI'd suggest adding HOLD/RESUME_INTERRUPTS() to dsm_detach(). At least \naround the removal of the callback from the list and calling the \ncallback. Maybe even over the whole function.\n\n- Heikki\n\n\n",
"msg_date": "Tue, 26 Jan 2021 11:00:56 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: shared tempfile was not removed on statement_timeout"
},
{
"msg_contents": "At Tue, 26 Jan 2021 11:00:56 +0200, Heikki Linnakangas <hlinnaka@iki.fi> wrote in \n> On 26/01/2021 06:46, Kyotaro Horiguchi wrote:\n> > Looking the comment of SharedFileSetOnDetach:\n> > | * everything in them. We can't raise an error on failures, because this\n> > | * runs\n> > | * in error cleanup paths.\n> > I feel that a function that shouldn't error-out also shouldn't be\n> > cancellable. If that's the case, we omit the CHECK_FOR_INTERRUPTS() in\n> > walkdir() when elevel is smaller than ERROR.\n> > =====\n> > diff --git a/src/backend/storage/file/fd.c\n> > b/src/backend/storage/file/fd.c\n> > index b58502837a..593c23553e 100644\n> > --- a/src/backend/storage/file/fd.c\n> > +++ b/src/backend/storage/file/fd.c\n> > @@ -3374,7 +3374,9 @@ walkdir(const char *path,\n> > \t{\n> > \t\tchar\t\tsubpath[MAXPGPATH * 2];\n> > -\t\tCHECK_FOR_INTERRUPTS();\n> > +\t\t/* omit interrupts while we shouldn't error-out */\n> > +\t\tif (elevel >= ERROR)\n> > +\t\t\tCHECK_FOR_INTERRUPTS();\n> > \t\tif (strcmp(de->d_name, \".\") == 0 ||\n> > \t\t\tstrcmp(de->d_name, \"..\") == 0)\n> > =====\n> \n> Don't we potentially have the same problem with all on_dsm_detach\n> callbacks? Looking at the other on_dsm_detach callbacks, I don't see\n> any CHECK_FOR_INTERRUPT() calls in them, but it seems fragile to\n> assume that.\n> \n> I'd suggest adding HOLD/RESUME_INTERRUPTS() to dsm_detach(). At least\n> around the removal of the callback from the list and calling the\n> callback. Maybe even over the whole function.\n\nYes, I first came up with HOLD/RESUME_QUERY_INTERRUPTS() to the same\nlocation.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 26 Jan 2021 20:22:21 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shared tempfile was not removed on statement_timeout"
},
{
"msg_contents": "On Wed, Jan 27, 2021 at 12:22 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Tue, 26 Jan 2021 11:00:56 +0200, Heikki Linnakangas <hlinnaka@iki.fi> wrote in\n> > Don't we potentially have the same problem with all on_dsm_detach\n> > callbacks? Looking at the other on_dsm_detach callbacks, I don't see\n> > any CHECK_FOR_INTERRUPT() calls in them, but it seems fragile to\n> > assume that.\n> >\n> > I'd suggest adding HOLD/RESUME_INTERRUPTS() to dsm_detach(). At least\n> > around the removal of the callback from the list and calling the\n> > callback. Maybe even over the whole function.\n>\n> Yes, I first came up with HOLD/RESUME_QUERY_INTERRUPTS() to the same\n> location.\n\n+1, this seems like a good idea. This is a little bit like the code\nnear the comments \"Don't joggle the elbow of proc_exit\".\n\n\n",
"msg_date": "Wed, 27 Jan 2021 09:34:25 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shared tempfile was not removed on statement_timeout"
},
{
"msg_contents": "On Wed, Jan 27, 2021 at 9:34 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Jan 27, 2021 at 12:22 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > At Tue, 26 Jan 2021 11:00:56 +0200, Heikki Linnakangas <hlinnaka@iki.fi> wrote in\n> > > Don't we potentially have the same problem with all on_dsm_detach\n> > > callbacks? Looking at the other on_dsm_detach callbacks, I don't see\n> > > any CHECK_FOR_INTERRUPT() calls in them, but it seems fragile to\n> > > assume that.\n> > >\n> > > I'd suggest adding HOLD/RESUME_INTERRUPTS() to dsm_detach(). At least\n> > > around the removal of the callback from the list and calling the\n> > > callback. Maybe even over the whole function.\n> >\n> > Yes, I first came up with HOLD/RESUME_QUERY_INTERRUPTS() to the same\n> > location.\n>\n> +1, this seems like a good idea. This is a little bit like the code\n> near the comments \"Don't joggle the elbow of proc_exit\".\n\nSo that gives a very simple back-patchable patch.",
"msg_date": "Sun, 31 Jan 2021 14:26:31 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shared tempfile was not removed on statement_timeout"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n>> +1, this seems like a good idea. This is a little bit like the code\n>> near the comments \"Don't joggle the elbow of proc_exit\".\n\n> So that gives a very simple back-patchable patch.\n\nHmm, so is the *rest* of that function perfectly okay with being\ninterrupted?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 31 Jan 2021 00:07:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: shared tempfile was not removed on statement_timeout"
},
{
"msg_contents": "On Sun, Jan 31, 2021 at 6:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > So that gives a very simple back-patchable patch.\n>\n> Hmm, so is the *rest* of that function perfectly okay with being\n> interrupted?\n\nIt looks OK to me. There aren't any CFI()s in there.\n\n\n",
"msg_date": "Fri, 5 Feb 2021 17:47:16 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shared tempfile was not removed on statement_timeout"
},
{
"msg_contents": "On Fri, Feb 5, 2021 at 5:47 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sun, Jan 31, 2021 at 6:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > So that gives a very simple back-patchable patch.\n> >\n> > Hmm, so is the *rest* of that function perfectly okay with being\n> > interrupted?\n>\n> It looks OK to me. There aren't any CFI()s in there.\n\nPushed. That closes CF #2657.\n\n\n",
"msg_date": "Mon, 15 Feb 2021 14:33:00 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shared tempfile was not removed on statement_timeout"
}
] |
[
{
"msg_contents": "Hello! I wrote my extension for PG and am trying to run Async_Notify from it. But messages come very late! Sometimes only after I execute the notify command from the shell. What am I doing wrong? How to use Async_Notify correctly.\n \n--\nАрсен Арутюнян",
"msg_date": "Thu, 12 Dec 2019 21:30:48 +0300",
"msg_from": "=?UTF-8?B?0JDRgNGB0LXQvSDQkNGA0YPRgtGO0L3Rj9C9?= <arutar@bk.ru>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?B?QXN5bmNfTm90aWZ5?="
},
{
"msg_contents": "Hi\n\nOn Fri, 13 Dec 2019 at 10:00, Арсен Арутюнян <arutar@bk.ru> wrote:\n\n> Hello! I wrote my extension for PG and am trying to run Async_Notify from\n> it. But messages come very late! Sometimes only after I execute the notify\n> command from the shell. What am I doing wrong? How to use Async_Notify\n> correctly.\n>\n>\n\nI am not sure what mechanism you use.\n\nNotify messages are sent after successful end of transaction.\n\nRegards\n\nPavel\n\n\n> --\n> Арсен Арутюнян\n>",
"msg_date": "Fri, 13 Dec 2019 10:19:19 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Async_Notify"
},
{
"msg_contents": "I'm trying to send a notification from the PG extension directly.\nIt turns out the mechanism that is used in the async file (Async_Notify) does not suit me, since the function uses oldcontext = MemoryContextSwitchTo(CurTransactionContext); and MemoryContextSwitchTo(oldcontext);\nAnd I did not find any way to add a notification except the Async_Notify function.\n \nNow only SPI_execute(\"NOTIFY chanel, 'message'\", false, 0); works.\n \nP.S. In my extension, there is already a higher function started with oldcontext = MemoryContextSwitchTo(CurTransactionContext); and MemoryContextSwitchTo(oldcontext);\n \n \n>On Friday, 13 December 2019, 12:19 +03:00, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> \n>Hi \n>On Fri, 13 Dec 2019 at 10:00, Арсен Арутюнян <arutar@bk.ru> wrote:\n>>Hello! I wrote my extension for PG and am trying to run Async_Notify from it. But messages come very late! Sometimes only after I execute the notify command from the shell. What am I doing wrong? How to use Async_Notify correctly.\n>> \n> \n>I am not sure what mechanism do you use.\n> \n>Notify messages are send after successful end of transaction.\n> \n>Regards\n> \n>Pavel\n> \n>>--\n>>Арсен Арутюнян \n \n \n--\nАрсен Арутюнян",
"msg_date": "Fri, 13 Dec 2019 14:30:21 +0300",
"msg_from": "=?UTF-8?B?0JDRgNGB0LXQvSDQkNGA0YPRgtGO0L3Rj9C9?= <arutar@bk.ru>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?B?UmVbMl06IEFzeW5jX05vdGlmeQ==?="
},
{
"msg_contents": "On Fri, 13 Dec 2019 at 12:30, Арсен Арутюнян <arutar@bk.ru> wrote:\n\n> I'm trying to send a notification from the PG extension directly.\n> It turns out the mechanism that is used in the async file. (Async_Notify)\n> does not suit me since the function uses oldcontext = MemoryContextSwitchTo\n> (CurTransactionContext); and MemoryContextSwitchTo (oldcontext);\n> And I did not find the ability to add notification except the Async_Notify\n> function.\n>\n> Now only SPI_execute works (\"NOTIFY chanel,’message’\", false, 0);\n>\n> P.S. In my extension, there is already a higher function started with\n> oldcontext = MemoryContextSwitchTo (CurTransactionContext); and\n> MemoryContextSwitchTo (oldcontext);\n>\n\nNOTIFY just pushes the message to a queue and waits for the end of the\ntransaction.\n\nThis is by design and you cannot change it.\n\nRegards\n\nPavel\n",
"msg_date": "Fri, 13 Dec 2019 13:16:49 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Re[2]: Async_Notify"
}
] |
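Pavel's explanation above — NOTIFY only queues the message, and delivery waits for a successful COMMIT — can be sketched with a toy model. This is hypothetical illustrative Python, not PostgreSQL source; `ToyBackend` and its methods are invented for the example.

```python
# Toy model of transactional NOTIFY semantics: notifications queue up
# inside the current transaction and reach listeners only on COMMIT;
# a ROLLBACK discards them. (Illustration only, not PostgreSQL code.)

class ToyBackend:
    def __init__(self):
        self.pending = []      # queued within the current transaction
        self.delivered = []    # what listeners actually see

    def notify(self, channel, payload):
        # like Async_Notify: only appends to a per-transaction queue
        self.pending.append((channel, payload))

    def commit(self):
        # queued notifications are sent at successful transaction end
        self.delivered.extend(self.pending)
        self.pending = []

    def rollback(self):
        # an aborted transaction delivers nothing
        self.pending = []

be = ToyBackend()
be.notify("chanel", "message")
assert be.delivered == []      # nothing is visible before commit
be.commit()
assert be.delivered == [("chanel", "message")]
```

This mirrors why the extension's messages seemed "late": they were queued correctly but could only be delivered once the surrounding transaction committed.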
[
{
"msg_contents": "I have just found out the hard way (cf commits 22864f6e0, 776a2c887)\nthat if one uses EXPLAIN with both ANALYZE and VERBOSE selected,\nthe output is not the same between force_parallel_mode = off and\nforce_parallel_mode = regress. This seems to me to be quite broken;\nwhat's the point of force_parallel_mode = regress if it doesn't\nproduce the same output?\n\nThe reason there's a problem is that ExplainNode() will show\nper-worker detail if both es->analyze and es->verbose are set,\neven when the only reason there's a worker process is that\nforce_parallel_mode injected a supposedly-invisible Gather.\n\nI don't see any way to fix this that doesn't involve some sort\nof \"action at a distance\". One could imagine hiding the per-worker\ndetail if we're underneath a Gather that has invisible set to\ntrue, but it's not really clear to me that that would do the\nright things in a plan with multiple Gather nodes. Any thoughts\nabout that?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 12 Dec 2019 14:01:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "force_parallel_mode = regress has a blind spot"
},
{
"msg_contents": "I wrote:\n> I have just found out the hard way (cf commits 22864f6e0, 776a2c887)\n> that if one uses EXPLAIN with both ANALYZE and VERBOSE selected,\n> the output is not the same between force_parallel_mode = off and\n> force_parallel_mode = regress. This seems to me to be quite broken;\n> what's the point of force_parallel_mode = regress if it doesn't\n> produce the same output?\n> The reason there's a problem is that ExplainNode() will show\n> per-worker detail if both es->analyze and es->verbose are set,\n> even when the only reason there's a worker process is that\n> force_parallel_mode injected a supposedly-invisible Gather.\n> I don't see any way to fix this that doesn't involve some sort\n> of \"action at a distance\". One could imagine hiding the per-worker\n> detail if we're underneath a Gather that has invisible set to\n> true, but it's not really clear to me that that would do the\n> right things in a plan with multiple Gather nodes. Any thoughts\n> about that?\n\nI took a closer look and decided that I was overthinking the problem.\nThe current implementation of hiding invisible Gathers only works for\na single Gather at the very top of the plan tree, so there's no need\nfor this adjustment to be smarter than that. If we ever want to\nrelax that, it'll be time enough to consider the possibility of\nhiding or not hiding workers in different parts of the tree.\n\nFor now, I propose the attached patch, which aside from fixing the\nproblem in 776a2c887 allows removal of a kluge in a different test case.\n\nI'm not sure whether to back-patch this. There seems no immediate\nneed to do so, but I wonder if someone might back-patch a test case\nthat depends on this fix.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 15 Dec 2019 14:22:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: force_parallel_mode = regress has a blind spot"
}
] |
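The fix Tom settles on amounts to a simple condition. The sketch below is a toy Python restatement, not the `ExplainNode()` source, and the function name is invented: per-worker detail is shown for ANALYZE + VERBOSE only when the plan's top node is not an invisible Gather injected by `force_parallel_mode = regress`.

```python
# Toy restatement of the suppression rule discussed in this thread
# (hypothetical helper, not PostgreSQL code).

def shows_worker_detail(analyze, verbose, under_invisible_top_gather):
    # per-worker detail requires both ANALYZE and VERBOSE, and must be
    # hidden when the worker exists only because of an invisible Gather
    return analyze and verbose and not under_invisible_top_gather

# force_parallel_mode = off: a real parallel plan shows the detail
assert shows_worker_detail(True, True, False)
# force_parallel_mode = regress: detail hidden so output matches "off"
assert not shows_worker_detail(True, True, True)
# without VERBOSE there is no per-worker detail either way
assert not shows_worker_detail(True, False, False)
```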
[
{
"msg_contents": "There were a number of recent threads about building PostgreSQL on \nMSYS2. This has been confusing on occasion; see for example [0]. MSYS2 \nis actually a derivative of Cygwin. What most people are actually doing \nis using MSYS2 as the host environment for doing a kind of \ncross-compilation to MinGW.\n\nYou can also build natively on MSYS2, using the existing Cygwin support. \n Except that it won't work because configure doesn't recognize the \nconfig.guess output. Attached are a couple of small patches to fix that \nup. The first patch fixes configure as explained. The second patch \nfixes some warnings in ps_status.c. It's curious why the existing build \nfarm members don't issue warnings there, but maybe their compilers are \ntoo old. The third patch fixes another warning; again, not sure why \noriginal Cygwin doesn't warn. It might be a bit too broad to apply like \nthat.\n\nMSYS2 doesn't ship with cygserver AFAICT, so you can't run a PostgreSQL \nserver, but everything else should work.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/6672cebd-0c07-ce1e-36f8-6ae82c496eb0@2ndquadrant.com\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 12 Dec 2019 22:11:36 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "MSYS2 support"
},
{
"msg_contents": "On 2019-12-12 22:11, Peter Eisentraut wrote:\n> You can also build natively on MSYS2, using the existing Cygwin support.\n> Except that it won't work because configure doesn't recognize the\n> config.guess output. Attached are a couple of small patches to fix that\n> up. The first patch fixes configure as explained. The second patch\n> fixes some warnings in ps_status.c. It's curious why the existing build\n> farm members don't issue warnings there, but maybe their compilers are\n> too old. The third patch fixes another warning; again, not sure why\n> original Cygwin doesn't warn. It might be a bit too broad to apply like\n> that.\n\nFor completeness: These patches have all been applied. They were also \nuseful for more recent Cygwin versions than the build farm currently has.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 26 Dec 2019 19:42:17 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: MSYS2 support"
}
] |
[
{
"msg_contents": "Hi,\n\nEvery SERIALIZABLEXACT holds an xmin that comes from a Snapshot, and\nSxactGlobalXmin holds the oldest of them. But a SERIALIZABLEXACT can\nlive longer than a transaction and snapshot, so now I'm wondering if\nit's possible to reach a state where there exist SERIALIZABLEXACT\nobjects with a range of xmin values that break the modular\nTransactionIdPrecedes()-based logic in SetNewSxactGlobalXmin(), which\nrelies on the set of values not spanning more than half of the 2^32\nclock. If that state is reachable, then I think the effect would be\nto call ClearOldPredicateLocks() at the wrong times (too much and\nwe'll waste CPU in that function, not enough and we'll \"leak\"\npredicate locks by not cleaning them up as eagerly as we intended).\n\n\n",
"msg_date": "Fri, 13 Dec 2019 10:30:19 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "What constrains the range of SERIALIZABLEXACT xmin values?"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-13 10:30:19 +1300, Thomas Munro wrote:\n> Every SERIALIZABLEXACT holds an xmin that comes from a Snapshot, and\n> SxactGlobalXmin holds the oldest of them. But a SERIALIZABLEXACT can\n> live longer than a transaction and snapshot, so now I'm wondering if\n> it's possible to reach a state where there exist SERIALIZABLEXACT\n> objects with a range of xmin values that break the modular\n> TransactionIdPrecedes()-based logic in SetNewSxactGlobalXmin(), which\n> relies on the set of values not spanning more than half of the 2^32\n> clock. If that state is reachable, then I think the effect would be\n> to call ClearOldPredicateLocks() at the wrong times (too much and\n> we'll waste CPU in that function, not enough and we'll \"leak\"\n> predicate locks by not cleaning them up as eagerly as we intended).\n\nI have only a weak grasp of the details of serializable, so maybe I'm\nentirely off base here: Can there actually be SERIALIZABLEXACT entries\nwith xmins that don't also exist in the table? I'd think that the fact\nthat rows with the relevant xmins won't commonly be removable would\npossibly provide enough interlock?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 12 Dec 2019 13:44:25 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: What constrains the range of SERIALIZABLEXACT xmin values?"
}
] |
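The constraint Thomas asks about comes from the signed-difference trick behind `TransactionIdPrecedes()`: it yields a consistent ordering only while all compared xids span less than half of the 2^32 clock. Below is a simplified Python sketch of that modular comparison; the real C function also special-cases permanent transaction IDs, which this illustration ignores.

```python
# Sketch of the modular XID comparison (illustration of the technique,
# not PostgreSQL source). The subtraction is interpreted as a signed
# 32-bit difference, which is only meaningful while the live xids span
# less than 2^31 of the 2^32 clock.

MASK = 0xFFFFFFFF

def xid_precedes(id1, id2):
    diff = (id1 - id2) & MASK
    if diff >= 0x80000000:       # reinterpret as signed 32-bit
        diff -= 0x100000000
    return diff < 0

assert xid_precedes(100, 200)
assert not xid_precedes(200, 100)
# wraparound works: an xid near 2^32 precedes a small "newer" xid
assert xid_precedes(0xFFFFFFF0, 10)
# but once two xids are more than 2^31 apart, the ordering inverts:
# the numerically larger xid now appears to precede the smaller one
assert xid_precedes(0x8000000B, 10)
```

The last assertion is exactly the failure mode under discussion: if SERIALIZABLEXACT xmins ever spanned more than half the clock, `SetNewSxactGlobalXmin()` could pick the wrong "oldest" value.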
[
{
"msg_contents": "Hi hackers,\r\n\r\nI believe I've uncovered a bug that may cause archive status \".ready\"\r\nfiles to be created too early, which in turn may cause an incorrect\r\nversion of the corresponding WAL segment to be archived.\r\n\r\nThe crux of the issue seems to be that XLogWrite() does not wait for\r\nthe entire record to be written to disk before creating the \".ready\"\r\nfile. Instead, it just waits for the last page of the segment to be\r\nwritten before notifying the archiver. If PostgreSQL crashes before\r\nit is able to write the rest of the record, it will end up reusing the\r\n\".ready\" segment at the end of crash recovery. In the meantime, the\r\narchiver process may have already processed the old version of the\r\nsegment.\r\n\r\nThis issue seems to most often manifest as WAL corruption on standby\r\nservers after the primary server has crashed because it ran out of\r\ndisk space. I have attached a proof-of-concept patch\r\n(ready_file_fix.patch) that waits to create any \".ready\" files until\r\ncloser to the end of XLogWrite(). The patch is incorrect for a few\r\nreasons, but I hope it helps illustrate the problem. I have also\r\nattached another patch (repro_helper.patch) to be used in conjunction\r\nwith the following steps to reproduce the issue:\r\n\r\n initdb .\r\n pg_ctl -D . 
-o \"-c archive_mode=on -c archive_command='exit 0'\" -l log.log start\r\n pgbench -i -s 1000 postgres\r\n psql postgres -c \"SELECT pg_switch_wal();\"\r\n\r\nWith just repro_helper.patch applied, these commands should produce\r\nboth of the following log statements:\r\n\r\n PANIC: failing at inconvenient time\r\n LOG: status file already exists for \"000000010000000000000017\"\r\n\r\nWith both patches applied, the commands will only produce the first\r\nPANIC statement.\r\n\r\nAnother thing I am exploring is whether a crash in between writing the\r\nlast page of a segment and creating the \".ready\" file could cause the\r\narchiver process to skip processing it altogether. In the scenario I\r\nmention earlier, the server seems to recreate the \".ready\" file since\r\nit rewrites a portion of the segment. However, if a WAL record fits\r\nperfectly into the last section of the segment, I am not sure whether\r\nthe \".ready\" file would be created after restart.\r\n\r\nI am admittedly in the early stages of working on this problem, but I\r\nthought it would be worth reporting to the community early on in case\r\nanyone has any thoughts on or past experiences with this issue.\r\n\r\nNathan",
"msg_date": "Thu, 12 Dec 2019 22:50:20 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "archive status \".ready\" files may be created too early"
},
{
"msg_contents": "Hello.\n\nAt Thu, 12 Dec 2019 22:50:20 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> Hi hackers,\n> \n> I believe I've uncovered a bug that may cause archive status \".ready\"\n> files to be created too early, which in turn may cause an incorrect\n> version of the corresponding WAL segment to be archived.\n> \n> The crux of the issue seems to be that XLogWrite() does not wait for\n> the entire record to be written to disk before creating the \".ready\"\n> file. Instead, it just waits for the last page of the segment to be\n> written before notifying the archiver. If PostgreSQL crashes before\n> it is able to write the rest of the record, it will end up reusing the\n> \".ready\" segment at the end of crash recovery. In the meantime, the\n> archiver process may have already processed the old version of the\n> segment.\n\nYeah, that can happen if the server is restarted after the crash.\n\n> This issue seems to most often manifest as WAL corruption on standby\n> servers after the primary server has crashed because it ran out of\n> disk space.\n\nIn the first place, it's quite bad to set restart_after_crash to on,\nor to just restart a crashed master in a replication setup. The standby\ncan be inconsistent at the time of the master crash, so it should be\nfixed using pg_rewind or recreated from a base backup.\n\nEven without that archiving behavior, a standby may receive WAL bytes\ninconsistent with the bytes from the same master just before the crash.\nIt is not limited to segment boundaries. It can happen on every block\nboundary and could happen everywhere with more complicated steps.\n\nWhat you are calling a \"problem\" seems to come from allowing the\nrestart_after_crash behavior. On the other hand, as recommended in the\ndocumentation, archive_command can refuse to overwrite an existing\nsegment, but we don't require that.\n\nAs a result, the patch doesn't seem to save anything beyond setting up\nand operating correctly.\n\n> Another thing I am exploring is whether a crash in between writing the\n> last page of a segment and creating the \".ready\" file could cause the\n> archiver process to skip processing it altogether. In the scenario I\n> mention earlier, the server seems to recreate the \".ready\" file since\n> it rewrites a portion of the segment. However, if a WAL record fits\n> perfectly into the last section of the segment, I am not sure whether\n> the \".ready\" file would be created after restart.\n\nWhy would that segment need .ready after restart, when nothing could\nhave been written to the old segment?\n\n> I am admittedly in the early stages of working on this problem, but I\n> thought it would be worth reporting to the community early on in case\n> anyone has any thoughts on or past experiences with this issue.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 13 Dec 2019 13:07:22 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 12/12/19, 8:08 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\r\n> At Thu, 12 Dec 2019 22:50:20 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in\r\n>> Hi hackers,\r\n>>\r\n>> I believe I've uncovered a bug that may cause archive status \".ready\"\r\n>> files to be created too early, which in turn may cause an incorrect\r\n>> version of the corresponding WAL segment to be archived.\r\n>>\r\n>> The crux of the issue seems to be that XLogWrite() does not wait for\r\n>> the entire record to be written to disk before creating the \".ready\"\r\n>> file. Instead, it just waits for the last page of the segment to be\r\n>> written before notifying the archiver. If PostgreSQL crashes before\r\n>> it is able to write the rest of the record, it will end up reusing the\r\n>> \".ready\" segment at the end of crash recovery. In the meantime, the\r\n>> archiver process may have already processed the old version of the\r\n>> segment.\r\n>\r\n> Year, that can happen if the server restarted after the crash.\r\n>\r\n>> This issue seems to most often manifest as WAL corruption on standby\r\n>> servers after the primary server has crashed because it ran out of\r\n>> disk space.\r\n>\r\n> In the first place, it's quite bad to set restart_after_crash to on,\r\n> or just restart crashed master in replication set. The standby can be\r\n> incosistent at the time of master crash, so it should be fixed using\r\n> pg_rewind or should be recreated from a base backup.\r\n>\r\n> Even without that archiving behavior, a standby may receive wal bytes\r\n> inconsistent to the bytes from the same master just before crash. It\r\n> is not limited to segment boundary. It can happen on every block\r\n> boundary and could happen everywhere with more complecated steps.\r\n>\r\n> What you are calling as a \"problem\" seems coming from allowing the\r\n> restart_after_crash behavior. 
On the other hand, as recommended in the\r\n> documentation, archive_command can refuse overwriting of the same\r\n> segment, but we don't impose to do that.\r\n>\r\n> As the result the patch doesn't seem to save anything than setting up\r\n> and operating correctly.\r\n\r\nDisregarding the behavior of standby servers for a minute, I think\r\nthat what I've described is still a problem for archiving. If the\r\nsegment is archived too early, point-in-time restores that require it\r\nwill fail. If the server refuses to overwrite existing archive files,\r\nthe archiver process may fail to process the \"good\" version of the\r\nsegment until someone takes action to fix it. I think this is\r\nespecially troubling for backup utilities like pgBackRest that check\r\nthe archive_status directory independently since it is difficult to\r\nknow if the segment is truly \".ready\".\r\n\r\nI've attached a slightly improved patch to show how this might be\r\nfixed. I am curious what concerns there are about doing something\r\nlike it to prevent this scenario.\r\n\r\n>> Another thing I am exploring is whether a crash in between writing the\r\n>> last page of a segment and creating the \".ready\" file could cause the\r\n>> archiver process to skip processing it altogether. In the scenario I\r\n>> mention earlier, the server seems to recreate the \".ready\" file since\r\n>> it rewrites a portion of the segment. However, if a WAL record fits\r\n>> perfectly into the last section of the segment, I am not sure whether\r\n>> the \".ready\" file would be created after restart.\r\n>\r\n> Why that segment needs .ready after restart, even though nothing could\r\n> be written to the old segment?\r\n\r\nIf a \".ready\" file is never created for a segment, I don't think it\r\nwill be archived.\r\n\r\nNathan",
"msg_date": "Fri, 13 Dec 2019 21:24:36 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2019-Dec-13, Kyotaro Horiguchi wrote:\n\n> At Thu, 12 Dec 2019 22:50:20 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n\n> > The crux of the issue seems to be that XLogWrite() does not wait for\n> > the entire record to be written to disk before creating the \".ready\"\n> > file. Instead, it just waits for the last page of the segment to be\n> > written before notifying the archiver. If PostgreSQL crashes before\n> > it is able to write the rest of the record, it will end up reusing the\n> > \".ready\" segment at the end of crash recovery. In the meantime, the\n> > archiver process may have already processed the old version of the\n> > segment.\n> \n> Year, that can happen if the server restarted after the crash.\n\n... which is the normal way to run things, no?\n\n> > servers after the primary server has crashed because it ran out of\n> > disk space.\n> \n> In the first place, it's quite bad to set restart_after_crash to on,\n> or just restart crashed master in replication set.\n\nWhy is it bad? It's the default value.\n\n> The standby can be incosistent at the time of master crash, so it\n> should be fixed using pg_rewind or should be recreated from a base\n> backup.\n\nSurely the master will just come up and replay its WAL, and there should\nbe no inconsistency.\n\nYou seem to be thinking that a standby is promoted immediately on crash\nof the master, but this is not a given.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 13 Dec 2019 18:33:44 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "Uggg. I must apologize for the last bogus comment.\n\nAt Fri, 13 Dec 2019 21:24:36 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> On 12/12/19, 8:08 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\n> > As the result the patch doesn't seem to save anything than setting up\n> > and operating correctly.\n> \n> Disregarding the behavior of standby servers for a minute, I think\n\nI'm sorry; a continuation record split across a segment boundary\ndoesn't seem to harm replication. Please forget it.\n\n> that what I've described is still a problem for archiving. If the\n\nYeah, I think that happens and it seems like a problem.\n\n> segment is archived too early, point-in-time restores that require it\n> will fail. If the server refuses to overwrite existing archive files,\n> the archiver process may fail to process the \"good\" version of the\n> segment until someone takes action to fix it. I think this is\n> especially troubling for backup utilities like pgBackRest that check\n> the archive_status directory independently since it is difficult to\n> know if the segment is truly \".ready\".\n> \n> I've attached a slightly improved patch to show how this might be\n> fixed. I am curious what concerns there are about doing something\n> like it to prevent this scenario.\n\nBasically, I agree with the direction, where the .ready notification is\ndelayed until all requested WAL bytes are written out.\n\nBut I think I found a corner case where the patch doesn't work. As I\nmentioned in another message, if the WAL buffer is full,\nAdvanceXLInsertBuffer calls XLogWrite to write out the victim buffer\nregardless of whether the last record in the page was the first half of\na continuation record. XLogWrite can mark the segment as .ready even\nwith the patch.\n\nIs that correct? And do you think the corner case is worth amending?\n\nIf so, we could also cover that case by marking the last segment as\n.ready when XLogWrite writes the first bytes of the next segment. (As\na further corner case, it still doesn't work if a continuation record\nspans three or more segments.. but I don't think (or don't want to\nthink) we need to consider that case..)\n\n\nThoughts?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 17 Dec 2019 19:25:12 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "Thank you Alvaro for the comment (on my comment).\n\nAt Fri, 13 Dec 2019 18:33:44 -0300, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n> On 2019-Dec-13, Kyotaro Horiguchi wrote:\n> \n> > At Thu, 12 Dec 2019 22:50:20 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> \n> > > The crux of the issue seems to be that XLogWrite() does not wait for\n> > > the entire record to be written to disk before creating the \".ready\"\n> > > file. Instead, it just waits for the last page of the segment to be\n> > > written before notifying the archiver. If PostgreSQL crashes before\n> > > it is able to write the rest of the record, it will end up reusing the\n> > > \".ready\" segment at the end of crash recovery. In the meantime, the\n> > > archiver process may have already processed the old version of the\n> > > segment.\n> > \n> > Year, that can happen if the server restarted after the crash.\n> \n> ... which is the normal way to run things, no?\n\nYes. In older versions (< 10) the default value for wal_level was\nminimal. In 10, the default for wal_level was changed to replica.\nStill I'm not sure if restart_after_crash can be recommended for\nstreaming replication...\n\n> Why is it bad? It's the default value.\n\nI reconsidered it more deeply, and concluded that it doesn't harm\nreplication as I thought.\n\nA WAL-buffer overflow may write a partial continuation record, and it\ncan be flushed immediately. That made me misunderstand that a standby\ncould receive only the first half of a continuation record. Actually,\nthat write doesn't advance LogwrtResult.Flush, so a standby doesn't\nreceive a record split on a page boundary. (The cases where a crashed\nmaster is used as a new standby as-is might have contaminated my\nthinking..)\n\nSorry for the bogus comment. My conclusion here is that\nrestart_after_crash doesn't seem to harm the standby immediately.\n\n> > The standby can be incosistent at the time of master crash, so it\n> > should be fixed using pg_rewind or should be recreated from a base\n> > backup.\n> \n> Surely the master will just come up and replay its WAL, and there should\n> be no inconsistency.\n> \n> You seem to be thinking that a standby is promoted immediately on crash\n> of the master, but this is not a given.\n\nBasically no, but it might be mixed a bit. Anyway, returning to the\nproposal, I think that XLogWrite can be called at WAL-buffer-full and\nit can go into the last page of a segment. The proposed patch doesn't\nwork then, since the XLogWrite call didn't write the whole continuation\nrecord. But I'm not sure that corner case is worth amending..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 17 Dec 2019 19:27:24 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On Fri, Dec 13, 2019 at 7:50 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> Hi hackers,\n>\n> I believe I've uncovered a bug that may cause archive status \".ready\"\n> files to be created too early, which in turn may cause an incorrect\n> version of the corresponding WAL segment to be archived.\n>\n> The crux of the issue seems to be that XLogWrite() does not wait for\n> the entire record to be written to disk before creating the \".ready\"\n> file. Instead, it just waits for the last page of the segment to be\n> written before notifying the archiver. If PostgreSQL crashes before\n> it is able to write the rest of the record, it will end up reusing the\n> \".ready\" segment at the end of crash recovery. In the meantime, the\n> archiver process may have already processed the old version of the\n> segment.\n\nMaybe I'm missing something... But since XLogWrite() seems to\ncall issue_xlog_fsync() before XLogArchiveNotifySeg(), ISTM that\nthis trouble shouldn't happen. No?\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Wed, 18 Dec 2019 14:10:04 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "Hello.\n\nAt Wed, 18 Dec 2019 14:10:04 +0900, Fujii Masao <masao.fujii@gmail.com> wrote in \n> On Fri, Dec 13, 2019 at 7:50 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n> >\n> > Hi hackers,\n> >\n> > I believe I've uncovered a bug that may cause archive status \".ready\"\n> > files to be created too early, which in turn may cause an incorrect\n> > version of the corresponding WAL segment to be archived.\n> >\n> > The crux of the issue seems to be that XLogWrite() does not wait for\n> > the entire record to be written to disk before creating the \".ready\"\n> > file. Instead, it just waits for the last page of the segment to be\n> > written before notifying the archiver. If PostgreSQL crashes before\n> > it is able to write the rest of the record, it will end up reusing the\n> > \".ready\" segment at the end of crash recovery. In the meantime, the\n> > archiver process may have already processed the old version of the\n> > segment.\n> \n> Maybe I'm missing something... But since XLogWrite() seems to\n> call issue_xlog_fsync() before XLogArchiveNotifySeg(), ISTM that\n> this trouble shouldn't happen. No?\n\nThe trouble happens after the synced file is archived. If the last\nrecord in the archived segment was the first half of a continuation\nrecord and the server crashes before writing the last half, crash\nrecovery stops just before the first half and a different record can\nbe written over it. As a result, the archived version of the segment\nbecomes rotten.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 18 Dec 2019 17:36:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
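The corruption scenario described above can be illustrated with a toy model. This is an illustrative sketch only, not PostgreSQL code: `SEG_SIZE`, `ready_old()`, `ready_safe()`, and the byte positions are all invented for the example. `ready_old()` mirrors the reported behavior (a segment is marked .ready as soon as its last page is written), while `ready_safe()` additionally holds a segment back while a continuation record that starts in it is still only partially flushed.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SEG_SIZE 16u   /* toy segment size; real wal_segment_size defaults to 16 MB */

/* Reported behavior: .ready as soon as the segment's last byte is written,
 * ignoring record boundaries. */
static bool
ready_old(uint64_t flushed, uint64_t seg)
{
    return flushed >= (seg + 1) * SEG_SIZE;
}

/* Safe behavior: additionally wait while a record spanning the segment's
 * end (rec_start .. rec_end) is not yet fully on disk. */
static bool
ready_safe(uint64_t flushed, uint64_t rec_start, uint64_t rec_end, uint64_t seg)
{
    uint64_t seg_end = (seg + 1) * SEG_SIZE;

    if (flushed < seg_end)
        return false;            /* segment itself not fully written */
    if (rec_start < seg_end && rec_end > seg_end && flushed < rec_end)
        return false;            /* continuation record still partial */
    return true;
}
```

With `SEG_SIZE = 16`, a record occupying bytes 10..20 crosses the first boundary; after flushing exactly 16 bytes, `ready_old()` would already archive segment 0, while `ready_safe()` waits until the flush pointer reaches 20.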
{
"msg_contents": "On 12/17/19, 2:26 AM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\r\n> But I think I found a corner case where the patch doesn't work. As I\r\n> mentioned in another message, if the WAL buffer was full,\r\n> AdvanceXLInsertBuffer calls XLogWrite to write out the victim buffer\r\n> regardless of whether the last record in the page was the first half of a\r\n> continuation record. XLogWrite can mark the segment as .ready even\r\n> with the patch.\r\n>\r\n> Is that correct? And do you think the corner case is worth amending?\r\n\r\nI certainly think it is worth trying to prevent potential WAL archive\r\ncorruption in known corner cases. Your comment highlights a potential\r\nshortcoming of my patch. AFAICT there is no guarantee that\r\nXLogWrite() is called with a complete WAL record. Even if that\r\nassumption is true at the moment, it might not hold up over time.\r\n\r\n> If so, we could amend also that case by marking the last segment as\r\n> .ready when XLogWrite writes the first bytes of the next segment. (As\r\n> a further corner case, it still doesn't work if a continuation record\r\n> spans three or more segments.. But I don't (or would rather not) think\r\n> we need to consider that case..)\r\n\r\nI'm working on a new version of the patch that will actually look at\r\nthe WAL page metadata to determine when it is safe to mark a segment\r\nas ready for archival. It seems relatively easy to figure out whether\r\na page is the last one for the current WAL record.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Wed, 18 Dec 2019 16:34:05 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 12/18/19, 8:34 AM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> On 12/17/19, 2:26 AM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\r\n>> If so, we could amend also that case by marking the last segment as\r\n>> .ready when XLogWrite writes the first bytes of the next segment. (As\r\n>> a further corner case, it still doesn't work if a continuation record\r\n>> spans three or more segments.. But I don't (or would rather not) think\r\n>> we need to consider that case..)\r\n>\r\n> I'm working on a new version of the patch that will actually look at\r\n> the WAL page metadata to determine when it is safe to mark a segment\r\n> as ready for archival. It seems relatively easy to figure out whether\r\n> a page is the last one for the current WAL record.\r\n\r\nI stand corrected. My attempts to add logic to check the WAL records\r\nadded quite a bit more complexity than seemed reasonable to maintain\r\nin this code path. For example, I didn't anticipate things like\r\nXLOG_SWITCH records.\r\n\r\nI am still concerned about the corner case you noted, but I have yet\r\nto find a practical way to handle it. You suggested waiting until\r\nwriting the first bytes of the next segment before marking a segment\r\nas ready, but I'm not sure that fixes this problem either, and I\r\nwonder if it could result in waiting arbitrarily long before creating\r\na \".ready\" file in some cases. Perhaps I am misunderstanding your\r\nsuggestion.\r\n\r\nAnother thing I noticed is that any changes in this area could impact\r\narchive_timeout. If we reset the archive_timeout timer when we mark\r\nthe segments ready, we could force WAL switches more often. If we do\r\nnot move the timer logic, we could be resetting it before the file is\r\nready for the archiver. However, these differences might be subtle\r\nenough to be okay.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Sat, 21 Dec 2019 01:18:24 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "At Sat, 21 Dec 2019 01:18:24 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> On 12/18/19, 8:34 AM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\n> > On 12/17/19, 2:26 AM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\n> >> If so, we could amend also that case by marking the last segment as\n> >> .ready when XLogWrite writes the first bytes of the next segment. (As\n> >> a further corner case, it still doesn't work if a continuation record\n> >> spans three or more segments.. But I don't (or would rather not) think\n> >> we need to consider that case..)\n> >\n> > I'm working on a new version of the patch that will actually look at\n> > the WAL page metadata to determine when it is safe to mark a segment\n> > as ready for archival. It seems relatively easy to figure out whether\n> > a page is the last one for the current WAL record.\n> \n> I stand corrected. My attempts to add logic to check the WAL records\n> added quite a bit more complexity than seemed reasonable to maintain\n> in this code path. For example, I didn't anticipate things like\n> XLOG_SWITCH records.\n> \n> I am still concerned about the corner case you noted, but I have yet\n> to find a practical way to handle it. You suggested waiting until\n> writing the first bytes of the next segment before marking a segment\n> as ready, but I'm not sure that fixes this problem either, and I\n> wonder if it could result in waiting arbitrarily long before creating\n> a \".ready\" file in some cases. Perhaps I am misunderstanding your\n> suggestion.\n> \n> Another thing I noticed is that any changes in this area could impact\n> archive_timeout. If we reset the archive_timeout timer when we mark\n> the segments ready, we could force WAL switches more often. If we do\n> not move the timer logic, we could be resetting it before the file is\n> ready for the archiver. However, these differences might be subtle\n> enough to be okay.\n\nYou're right. 
That doesn't seem to work. Another thing I had in my\nmind was that XLogWrite had an additional flag so that\nAdvanceXLInsertBuffer can tell it not to mark .ready. In the normal path\nthe function is called while it *is* writing a complete record. So even if\nAdvanceXLInsertBuffer inhibits marking .ready, the succeeding bytes\ncome soon and we can mark the old segment as .ready at that time.\n\n..\n+ * If record_write == false, we don't mark the last segment as .ready\n+ * if the caller requested to write up to a segment boundary.\n..\n static void\n- XLogWrite(XLogwrtRqst WriteRqst, bool flexible)\n+ XLogWrite(XLogwrtRqst WriteRqst, bool flexible, bool record_write)\n\nWhen XLogWrite is called with record_write = false, we don't mark\n.ready and don't advance lastSegSwitchTime/LSN. The next time\nXLogWrite is called with record_write = true, if lastSegSwitchLSN is\nbehind the latest segment boundary at or before\nLogwrtResult.Write, we mark the skipped segments as .ready and update\nlastSegSwitchTime/LSN.\n\nDoes the above make sense?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 24 Dec 2019 11:08:07 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
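The record_write idea above can be sketched in miniature. This is a toy simulation, not the PostgreSQL implementation: `xlog_write()`, `SEG_SIZE`, and the counters are invented for illustration. A buffer-full write (`record_write == false`) advances the write pointer without notifying; the next record-completing write catches up all skipped segments.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SEG_SIZE 16u                /* toy segment size, in bytes */

static uint64_t written;            /* toy Write pointer */
static int64_t last_notified = -1;  /* highest segment marked .ready */

/* Toy XLogWrite: only calls that finish a complete WAL record
 * (record_write == true) may catch up on .ready notifications. */
static void
xlog_write(uint64_t write_upto, bool record_write)
{
    if (write_upto > written)
        written = write_upto;
    if (record_write)
    {
        /* every fully written segment now ends at or before a record boundary */
        int64_t newest = (int64_t) (written / SEG_SIZE) - 1;

        while (last_notified < newest)
            last_notified++;        /* would create the .ready file here */
    }
}
```

A buffer-full write that stops exactly at the boundary mid-record (`xlog_write(16, false)`) leaves `last_notified` at -1; once the rest of the record is written by a record-completing call (`xlog_write(20, true)`), segment 0 is notified.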
{
"msg_contents": "On 12/23/19, 6:09 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\r\n> You're right. That doesn't seem to work. Another thing I had in my\r\n> mind was that XLogWrite had an additional flag so that\r\n> AdvanceXLInsertBuffer can tell it not to mark .ready. In the normal path\r\n> the function is called while it *is* writing a complete record. So even if\r\n> AdvanceXLInsertBuffer inhibits marking .ready, the succeeding bytes\r\n> come soon and we can mark the old segment as .ready at that time.\r\n>\r\n> ..\r\n> + * If record_write == false, we don't mark the last segment as .ready\r\n> + * if the caller requested to write up to a segment boundary.\r\n> ..\r\n> static void\r\n> - XLogWrite(XLogwrtRqst WriteRqst, bool flexible)\r\n> + XLogWrite(XLogwrtRqst WriteRqst, bool flexible, bool record_write)\r\n>\r\n> When XLogWrite is called with record_write = false, we don't mark\r\n> .ready and don't advance lastSegSwitchTime/LSN. The next time\r\n> XLogWrite is called with record_write = true, if lastSegSwitchLSN is\r\n> behind the latest segment boundary at or before\r\n> LogwrtResult.Write, we mark the skipped segments as .ready and update\r\n> lastSegSwitchTime/LSN.\r\n\r\nThanks for the suggestion. I explored this proposal a bit today.\r\nIt looks like there are three places where XLogWrite() is called:\r\nAdvanceXLInsertBuffer(), XLogFlush(), and XLogBackgroundFlush(). IIUC\r\nwhile XLogFlush() generally seems to be used to write complete records\r\nto disk, this might not be true for XLogBackgroundFlush(), and we're\r\nreasonably sure we cannot make such an assumption for\r\nAdvanceXLInsertBuffer(). Therefore, we would likely only set\r\nrecord_write to true for XLogFlush() and for certain calls to\r\nXLogBackgroundFlush (e.g. flushing asynchronous commits).\r\n\r\nI'm worried that this approach could be fragile and that we could end\r\nup waiting an arbitrarily long time before marking segments as ready\r\nfor archival. 
Even if we pay very close attention to the latest\r\nflushed LSN, it seems possible that a non-record_write call to\r\nXLogWrite() advances things such that we avoid ever calling it with\r\nrecord_write = true. For example, XLogBackgroundFlush() may have\r\nflushed the completed blocks, which we cannot assume are complete\r\nrecords. Then, XLogFlush() would skip calling XLogWrite() if\r\nLogwrtResult.Flush is sufficiently far ahead. In this scenario, I\r\ndon't think we would mark any eligible segments as \".ready\" until the\r\nnext call to XLogWrite() with record_write = true, which may never\r\nhappen.\r\n\r\nThe next approach I'm going to try is having the callers of\r\nXLogWrite() manage marking segments ready. That might make it easier\r\nto mitigate some of my concerns above, but I'm not tremendously\r\noptimistic that this approach will fare any better.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Tue, 31 Dec 2019 01:49:21 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "Sorry for the long delay.\r\n\r\nI've finally gotten to a new approach that I think is promising. My\r\nprevious attempts to fix this within XLogWrite() or within the\r\nassociated code paths all seemed to miss corner cases or to add far\r\ntoo much complexity. The new proof-of-concept patch that I have\r\nattached is much different. Instead of trying to adjust the ready-\r\nfor-archive logic in the XLogWrite() code paths, I propose relocating\r\nthe ready-for-archive logic to a separate process.\r\n\r\nThe v3 patch is a proof-of-concept patch that moves the ready-for-\r\narchive logic to the WAL writer process. We mark files as ready-for-\r\narchive when the WAL flush pointer has advanced beyond a known WAL\r\nrecord boundary. In this patch, I am using the WAL insert location as\r\nthe known WAL record boundary. The main idea is that it should be\r\nsafe to archive a segment once we know the last WAL record for the WAL\r\nsegment, which may overflow into the following segment, has been\r\ncompletely written to disk.\r\n\r\nThere are many things missing from this proof-of-concept patch that\r\nwill need to be handled if this approach seems reasonable. For\r\nexample, I haven't looked into any adjustments needed for the\r\narchive_timeout parameter, I haven't added a way to persist the\r\n\"latest segment marked ready-for-archive\" through crashes, I haven't\r\ntried reducing the frequency of retrieving the WAL locations, and I'm\r\nnot sure the WAL writer process is even the right location for this\r\nlogic. However, these remaining problems all seem tractable to me.\r\n\r\nI would appreciate your feedback on whether you believe this approach\r\nis worth pursuing.\r\n\r\nNathan",
"msg_date": "Thu, 26 Mar 2020 18:50:24 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "\r\n2020-03-26 18:50:24 Bossart, Nathan <bossartn(at)amazon(dot)com> wrote:\r\n> The v3 patch is a proof-of-concept patch that moves the ready-for-\r\n> archive logic to the WAL writer process. We mark files as ready-for-\r\n> archive when the WAL flush pointer has advanced beyond a known WAL\r\n> record boundary.\r\n\r\n\r\nI like such a simple resolution, but I cannot agree with it.\r\n\r\n1.\r\nThis patch gives wal_writer_delay two meanings. For example, a user\r\nsetting the parameter to a bigger value gets an archived file later.\r\n\r\n2.\r\nEven if we create a new parameter, we and users cannot determine the\r\nbest value.\r\n\r\n3.\r\nPostgreSQL guarantees that if a database cluster is stopped smartly,\r\nthe cluster has flushed and archived all WAL records, as follows.\r\n\r\n [xlog.c]\r\n * If archiving is enabled, rotate the last XLOG file so that all the\r\n * remaining records are archived (postmaster wakes up the archiver\r\n * process one more time at the end of shutdown). The checkpoint\r\n * record will go to the next XLOG file and won't be archived (yet).\r\n\r\nTherefore, the idea may need shutdown synchronization between WalWriter\r\nand the archiver (pgarch). I cannot agree with that because shutdown\r\nprocessing is inherently complex and the synchronization makes it more\r\ncomplicated. Your idea trades timeliness of the notification for\r\nsimplicity, but I think the synchronization may ruin that merit.\r\n\r\n4.\r\nI found that the patch can miss a chance to notify. We have to be more\r\ncareful. In the following case, WalWriter will notify only after a\r\nlittle less than three times wal_writer_delay in the worst case. That\r\nmay not be acceptable depending on the value of wal_writer_delay. 
If we create a new parameter, we cannot explain this behavior to users.\r\n\r\nPremise:\r\n- Seg1 has already been notified.\r\n- FlushedPtr is 0/2D00000 (= all WAL records are flushed).\r\n\r\n-----\r\nStep 1.\r\nBackend-A updates InsertPtr to 0/2E00000, but does not\r\ncopy its WAL record to the buffer.\r\n\r\nStep 2. (sleep)\r\nWalWriter memorizes InsertPtr 0/2E00000 in the local variable\r\n(LocalInsertPtr) and sleeps because FlushedPtr has not passed\r\nInsertPtr.\r\n\r\nStep 3.\r\nBackend-A copies its WAL record to the buffer.\r\n\r\nStep 4.\r\nBackend-B updates InsertPtr to 0/3100000,\r\ncopies its record to the buffer, commits (flushing it by itself),\r\nand updates FlushedPtr to 0/3100000.\r\n\r\nStep 5.\r\nWalWriter detects that FlushedPtr (0/3100000) passes\r\nLocalInsertPtr (0/2E00000), but WalWriter cannot notify Seg2\r\nthough it should be notified.\r\n\r\nThis is caused by the fact that WalWriter does not know\r\nwhich record crosses the segment boundary.\r\n\r\nThen, in the worst case, Seg2 is notified only after two more sleeps\r\nspent checking whether InsertPtr passes FlushedPtr again.\r\n\r\nStep 6. (sleep)\r\nWalWriter sleeps.\r\n\r\nStep 7.\r\nBackend-C inserts a WAL record, flushes, and updates as follows:\r\nInsertPtr --> 0/3200000\r\nFlushedPtr --> 0/3200000\r\n\r\nStep 8.\r\nBackend-D updates InsertPtr to 0/3300000, but does not copy its\r\nrecord to the buffer.\r\n\r\nStep 9. (sleep)\r\nWalWriter memorizes InsertPtr 0/3300000 in LocalInsertPtr\r\nand sleeps because FlushedPtr is still 0/3200000.\r\n\r\nStep 10.\r\nBackend-D copies its record.\r\n\r\nStep 11.\r\nSomeone (Backend-X or WalWriter) flushes and updates FlushedPtr\r\nto 0/3300000.\r\n\r\nStep 12.\r\nWalWriter detects that FlushedPtr (0/3300000) reaches\r\nLocalInsertPtr (0/3300000) and notifies Seg2.\r\n-----\r\n\r\n\r\nI'm preparing a patch that backend inserting segment-crossboundary\r\nWAL record leaves its EndRecPtr and someone flushing it checks\r\nthe EndRecPtr and notifies..\r\n\r\n\r\nRegards\r\nRyo Matsumura\r\n",
"msg_date": "Fri, 29 May 2020 06:41:40 +0000",
"msg_from": "\"matsumura.ryo@fujitsu.com\" <matsumura.ryo@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 5/28/20, 11:42 PM, \"matsumura.ryo@fujitsu.com\" <matsumura.ryo@fujitsu.com> wrote:\r\n> I'm preparing a patch that backend inserting segment-crossboundary\r\n> WAL record leaves its EndRecPtr and someone flushing it checks\r\n> the EndRecPtr and notifies..\r\n\r\nThank you for sharing your thoughts. I will be happy to take a look\r\nat your patch.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Sun, 31 May 2020 22:28:30 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "> On 5/28/20, 11:42 PM, \"matsumura.ryo@fujitsu.com\" <matsumura.ryo@fujitsu.com>\r\n> wrote:\r\n> > I'm preparing a patch that backend inserting segment-crossboundary\r\n> > WAL record leaves its EndRecPtr and someone flushing it checks\r\n> > the EndRecPtr and notifies..\r\n\r\n\r\nI'm sorry for my slow work.\r\n\r\nI attach a patch.\r\nI also attach a simple targeted test for the primary.\r\n\r\n\r\n1. Description of the primary side\r\n\r\n[Basic problem]\r\n A process flushing WAL records doesn't know whether the flushed RecPtr is\r\n the EndRecPtr of a cross-segment-boundary WAL record or not, because only\r\n the process inserting the WAL record knows it, and it never memorizes the\r\n information anywhere.\r\n\r\n[Basic concept of the patch in primary]\r\n A process inserting a cross-segment-boundary WAL record memorizes its EndRecPtr\r\n (I call it CrossBoundaryEndRecPtr) in a new structure in XLogCtl.\r\n A flushing process creates .ready (later, I call it just 'notify') for the\r\n segment previous to the one including a CrossBoundaryEndRecPtr, only when its\r\n flushed RecPtr is equal to or greater than the CrossBoundaryEndRecPtr.\r\n\r\n[Detail of implementation in primary]\r\n* Structure of CrossBoundaryEndRecPtrs\r\n The requirements for the structure are as follows:\r\n - The system must memorize multiple CrossBoundaryEndRecPtrs.\r\n - A flushing process must be able to decide quickly whether to notify,\r\n using only the flushed RecPtr.\r\n\r\n Therefore, I implemented the structure as an array (I call it the\r\n CrossBoundaryEndRecPtr array) that parallels the xlblocks array. 
Strictly, a length of\r\n 'xbuffers/wal_segment_size' would be enough, but I chose 'xbuffers' for\r\n simplicity, which lets the flushing process use XLogRecPtrToBufIdx().\r\n See also the definition of XLogCtl, XLOGShmemSize(), and XLOGShmemInit() in my patch.\r\n\r\n* Action of inserting process\r\n An inserting process memorizes its CrossBoundaryEndRecPtr in the CrossBoundaryEndRecPtr\r\n array element calculated by XLogRecPtrToBufIdx with its CrossBoundaryEndRecPtr.\r\n If the WAL record crosses many segments, only the element for the last segment,\r\n the one including the EndRecPtr, is set; the others are not set.\r\n See also CopyXLogRecordToWAL() in my patch.\r\n\r\n* Action of flushing process\r\n The overview has already been given above.\r\n A flushing process creates .ready (later, I call it just 'notify') for the\r\n segment previous to the one including a CrossBoundaryEndRecPtr, only when its\r\n flushed RecPtr is equal to or greater than the CrossBoundaryEndRecPtr.\r\n\r\n An additional detail is as follows. The flushing process may notify\r\n many segments if the record crosses many segments, so the process memorizes the\r\n latest notified segment number in latestArchiveNotifiedSegNo in XLogCtl.\r\n The process notifies from latestArchiveNotifiedSegNo + 1 to\r\n the flushing segment number - 1.\r\n\r\n And latestArchiveNotifiedSegNo is set to EndOfLog after the Startup process exits\r\n the replay loop. Standby also sets it at the same timing (= before promoting).\r\n\r\n Mutual exclusion for latestArchiveNotifiedSegNo is not required because\r\n every access occurs inside the WALWriteLock critical section.\r\n\r\n\r\n2. Description of the standby side\r\n\r\n* Who notifies?\r\n walreceiver also doesn't know whether the flushed RecPtr is the EndRecPtr of a\r\n cross-segment-boundary WAL record or not. 
In standby, only the Startup process\r\n knows the information because it is hidden in the WAL record itself and only\r\n the Startup process reads and builds WAL records.\r\n\r\n* Action of Startup process\r\n Therefore, I implemented it so that walreceiver never notifies and the Startup process does it.\r\n In detail, when the Startup process reads one full-length WAL record, it notifies\r\n from the segment that includes the head (ReadRecPtr) of the record to the segment\r\n previous to the one that includes the EndRecPtr of the record.\r\n\r\n Now, we must pay attention to timeline switching.\r\n The last segment of the previous TimeLineID must be notified before switching.\r\n This case is handled when RM_XLOG_ID is detected.\r\n\r\n\r\n3. About other notifying for segments\r\nTwo notifyings for segments remain. They do not need to be fixed.\r\n\r\n(1) Notifying for a partial segment\r\nIt is not required to be complete, so it's OK to notify without special consideration.\r\n\r\n(2) Re-notifying\r\nCurrently, Checkpointer has notified through XLogArchiveCheckDone().\r\nIt is a safety net for failures of notifying by a backend or the WAL writer.\r\nA backend or the WAL writer doesn't retry a notify that fails, but Checkpointer retries\r\nnotifying when it removes old segments. If it fails to notify, it does not\r\nremove the segment. That makes Checkpointer retry notifying until the notifying succeeds.\r\nAlso, in this case, we can just notify without special consideration\r\nbecause Checkpointer guarantees that all WAL records included in the segment have already been flushed.\r\n\r\n\r\nPlease review and comment.\r\n\r\n\r\nRegards\r\nRyo Matsumura",
"msg_date": "Fri, 19 Jun 2020 10:18:34 +0000",
"msg_from": "\"matsumura.ryo@fujitsu.com\" <matsumura.ryo@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "Hello, Matsumura-san.\n\nI agree that WAL writer is not the place to notify segments. And the\ndirection you suggested would work.\n\nAt Fri, 19 Jun 2020 10:18:34 +0000, \"matsumura.ryo@fujitsu.com\" <matsumura.ryo@fujitsu.com> wrote in \n> 1. Description of the primary side\n> \n> [Basic problem]\n> A process flushing WAL records doesn't know whether the flushed RecPtr is\n> the EndRecPtr of a cross-segment-boundary WAL record or not, because only\n> the process inserting the WAL record knows it, and it never memorizes the\n> information anywhere.\n> \n> [Basic concept of the patch in primary]\n> A process inserting a cross-segment-boundary WAL record memorizes its EndRecPtr\n> (I call it CrossBoundaryEndRecPtr) in a new structure in XLogCtl.\n> A flushing process creates .ready (later, I call it just 'notify') for the\n> segment previous to the one including a CrossBoundaryEndRecPtr, only when its\n> flushed RecPtr is equal to or greater than the CrossBoundaryEndRecPtr.\n...\n> See also the definition of XLogCtl, XLOGShmemSize(), and XLOGShmemInit() in my patch.\n\nI think we don't need most of that shmem stuff. XLogWrite is called\nafter the WAL buffer is filled up to the requested position. So when it\ncrosses a segment boundary we know that all past cross-segment-boundary\nrecords are stable. 
That means all we need to remember is only the\nposition of the latest cross-boundary record.\n\n> * Action of inserting process\n> An inserting process memorizes its CrossBoundaryEndRecPtr in the CrossBoundaryEndRecPtr\n> array element calculated by XLogRecPtrToBufIdx with its CrossBoundaryEndRecPtr.\n> If the WAL record crosses many segments, only the element for the last segment,\n> the one including the EndRecPtr, is set; the others are not set.\n> See also CopyXLogRecordToWAL() in my patch.\n\nIf we call XLogMarkEndRecPtrIfNeeded() there, the function is called\nevery time a record is written, and most of those calls are wasteful.\nXLogInsertRecord already has a code block executed only at every page\nboundary.\n\n> * Action of flushing process\n> Overview ... critical section.\n\nLooks reasonable.\n\n> 2. Description of the standby side\n> \n> * Who notifies?\n> walreceiver also doesn't know whether the flushed RecPtr is the EndRecPtr of a\n> cross-segment-boundary WAL record or not. 
In standby, only the Startup process\n> knows the information because it is hidden in the WAL record itself and only\n> the Startup process reads and builds WAL records.\n\nA standby doesn't write its own WAL records. Even if the primary sent an\nincomplete record on a segment boundary, the standby would just promote to a\nnew TLI and start its own history. Nothing breaks. It could be a problem if a\nstandby that crashed in the problematic way were started as-is as a primary,\nbut such a scenario is out of our scope.\n\nNow we can identify the stable portion of the WAL stream. That's enough to\nprevent walsender from sending data that can be overwritten\nafterwards. GetReplicationTargetRecPtr() in the attached does that.\n\n> * Action of Startup process\n> Therefore, I implemented it so that walreceiver never notifies and the Startup process does it.\n> In detail, when the Startup process reads one full-length WAL record, it notifies\n> from the segment that includes the head (ReadRecPtr) of the record to the segment\n> previous to the one that includes the EndRecPtr of the record.\n\nI don't like that archiving on the standby relies on replay progress. We\nshould avoid that, and fortunately I think we don't need it.\n\n> Now, we must pay attention to timeline switching.\n> The last segment of the previous TimeLineID must be notified before switching.\n> This case is handled when RM_XLOG_ID is detected.\n\nThat segment is archived later, after being renamed to \".partial\". We don't\narchive the last incomplete segment of the previous timeline as-is.\n\n> 3. About other notifying for segments\n> Two notifyings for segments remain. 
They do not need to be fixed.\n> \n> (1) Notifying for a partial segment\n> It is not required to be complete, so it's OK to notify without special consideration.\n> \n> (2) Re-notifying\n> Currently, Checkpointer has notified through XLogArchiveCheckDone().\n> It is a safety net for failures of notifying by a backend or the WAL writer.\n> A backend or the WAL writer doesn't retry a notify that fails, but Checkpointer retries\n> notifying when it removes old segments. If it fails to notify, it does not\n> remove the segment. That makes Checkpointer retry notifying until the notifying succeeds.\n> Also, in this case, we can just notify without special consideration\n> because Checkpointer guarantees that all WAL records included in the segment have already been flushed.\n\n\nSo it can be simplified as in the attached. Any thoughts?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 25 Jun 2020 15:35:32 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
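The simplification proposed above, remembering only the latest segment-crossing record (the lastSegContRecStart/End idea), can be sketched as a pure decision function. This is an assumption-laden toy, not the attached patch: `SEG_SIZE` and `last_notifiable_segment()` are invented names, and `cont_end == 0` stands in for "no pending cross-boundary record". The same stable-portion computation could in principle feed both archive notification and the walsender limit mentioned for GetReplicationTargetRecPtr().

```c
#include <assert.h>
#include <stdint.h>

#define SEG_SIZE 16u            /* toy segment size, in bytes */

/*
 * Toy decision: the newest segment (0-based) that may be marked .ready,
 * or -1 if none. cont_start/cont_end describe the latest record that
 * crosses a segment boundary; cont_end == 0 means no such record.
 */
static int64_t
last_notifiable_segment(uint64_t flushed, uint64_t cont_start, uint64_t cont_end)
{
    int64_t full = (int64_t) (flushed / SEG_SIZE) - 1;  /* last fully written */
    int64_t hold;

    if (full < 0)
        return -1;
    if (cont_end == 0 || flushed >= cont_end)
        return full;            /* pending boundary record fully flushed */
    /* hold back the segment where the pending record starts, and later ones */
    hold = (int64_t) (cont_start / SEG_SIZE) - 1;
    return hold < full ? hold : full;
}
```

For example, with `SEG_SIZE = 16` and a pending record at bytes 10..20, a flush pointer of 16 allows nothing to be notified, while a flush pointer of 20 releases segment 0; a record at 34..50 with 48 bytes flushed still allows segments 0 and 1.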
{
"msg_contents": "Hello, Horiguchi-san\n\nThank you for your comment and patch.\n\nAt Thursday, June 25, 2020 3:36 PM(JST), \"Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>\" wrote in\n> I think we don't need most of that shmem stuff. XLogWrite is called\n\nI wanted no more shmem stuff too, but the other ideas need more locking\nthat mutually excludes the inserter and the writer.\n\n> after the WAL buffer is filled up to the requested position. So when it\n> crosses a segment boundary we know that all past cross-segment-boundary\n> records are stable. That means all we need to remember is only the\n> position of the latest cross-boundary record.\n\nI could not agree. In the following case, it may not work well.\n- record-A and record-B (record-B is the newer one) are copied, and\n- lastSegContRecStart/End point to record-B's, and\n- FlushPtr has advanced to the middle of record-A.\n\nIn the above case, the writer should notify segments before record-A,\nbut it notifies ones before record-B. If the writer notifies\nonly when it flushes the latest record completely, it works well.\nBut the writer may never be able to notify any segment when\nsegment-boundary-crossing WAL records are inserted continuously.\n\nSo I think that we must remember the EndRecPtrs of all such\ncross-segment-boundary records in the buffer.\n\n\n> If we call XLogMarkEndRecPtrIfNeeded() there, the function is called\n> every time a record is written, and most of those calls are wasteful.\n> XLogInsertRecord already has a code block executed only at every page\n> boundary.\n\nI agree.\nXLogMarkEndRecPtrIfNeeded() is moved into the code block before updating\nLogwrtRqst.Write to avoid racing with the writer.\n\n\n> Now we can identify the stable portion of the WAL stream. That's enough to\n> prevent walsender from sending data that can be overwritten\n> afterwards. 
GetReplicationTargetRecPtr() in the attached does that.\n\nI didn't notice it.\nI basically agree, but it is based on lastSegContRecStart/End.\n\nSo, first of all, we have to agree on what should be remembered.\n\n\nRegards\nRyo Matsumura\n\n\n",
"msg_date": "Mon, 6 Jul 2020 04:02:23 +0000",
"msg_from": "\"matsumura.ryo@fujitsu.com\" <matsumura.ryo@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "Hello, Matsumura-san.\n\nAt Mon, 6 Jul 2020 04:02:23 +0000, \"matsumura.ryo@fujitsu.com\" <matsumura.ryo@fujitsu.com> wrote in \n> Hello, Horiguchi-san\n> \n> Thank you for your comment and patch.\n> \n> At Thursday, June 25, 2020 3:36 PM(JST), \"Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>\" wrote in\n> > I think we don't need most of that shmem stuff. XLogWrite is called\n> \n> I wanted no more shmem stuff too, but other ideas need more lock\n> that excludes inserter and writer each other.\n> \n> > after WAL buffer is filled up to the requested position. So when it\n> > crosses segment boundary we know the all past corss segment-boundary\n> > records are stable. That means all we need to remember is only the\n> > position of the latest corss-boundary record.\n> \n> I could not agree. In the following case, it may not work well.\n> - record-A and record-B (record-B is a newer one) is copied, and\n> - lastSegContRecStart/End points to record-B's, and\n> - FlushPtr is proceeded to in the middle of record-A.\n\nIIUC, that means record-B is a cross segment-border record and we hav e\nflushed beyond the recrod-B. In that case crash recovery afterwards\ncan read the complete record-B and will finish recovery *after* the\nrecord-B. That's what we need here.\n\n> In the above case, the writer should notify segments before record-A,\n> but it notifies ones before record-B. If the writer notifies\n\nIf you mean that NotifyStableSegments notifies up-to the previous\nsegment of the segment where record-A is placed, that's wrong. The\nissue here is crash recovery sees an incomplete record at a\nsegment-border. So it is sufficient that crash recoery can read the\nlast record by looking pg_wal.\n\n> only when it flushes the latest record completely, it works well.\n\nIt confirms that \"lastSegContRecEnd < LogwrtResult.Flush\", that means\nthe last record(B) is completely flushed-out, isn't that? 
So it works\nwell.\n\n> But the writer may not be enable to notify any segment forever when\n> WAL records crossing-segment-boundary are inserted contiunuously.\n\nNo. As I mentioned in the preivous main, if we see a\ncross-segment-boundary record, the previous cross-segment-boundary\nrecord is flushed completely, and the segment containing the\nfirst-half of the previous cross-segment-boundary record has already\nbeen flushed. I didin't that but we can put an assertion in\nXLogInsertRecord like this:\n\n + /* Remember the range of the record if it spans over segments */\n + XLByteToSeg(StartPos, startseg, wal_segment_size);\n + XLByteToPrevSeg(EndPos, endseg, wal_segment_size);\n +\n + if (startseg != endseg)\n + {\n++ /* we shouldn't have a record spanning over three or more segments */\n++ Assert(endseg = startseg + 1);\n + SpinLockAcquire(&XLogCtl->info_lck);\n + if (XLogCtl->lastSegContRecEnd < StartPos)\n + {\n + XLogCtl->lastSegContRecStart = StartPos;\n + XLogCtl->lastSegContRecEnd = EndPos;\n\n> So I think that we must remeber all such cross-segement-boundary records's EndRecPtr in buffer.\n> \n> \n> > If we call XLogMarkEndRecPtrIfNeeded() there, the function is called\n> > every time a record is written, most of which are wasteful.\n> > XLogInsertRecord already has a code block executed only at every page\n> > boundary.\n> \n> I agree.\n> XLogMarkEndRecPtrIfNeeded() is moved into the code block before updating\n> LogwrtRqst.Write for avoiding passing-each-other with writer.\n> \n> \n> > Now we can identify stable portion of WAL stream. It's enough to\n> > prevent walsender from sending data that can be overwritten\n> > afterwards. GetReplicationTargetRecPtr() in the attached does that.\n> \n> I didn't notice it.\n> I agree basically, but it is based on lastSegContRecStart/End.\n> \n> So, first of all, we have to agree what should be remebered.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 06 Jul 2020 14:13:40 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "Hello, Horiguchi-san\n\nAt Monday, July 6, 2020 05:13:40 +0000, \"Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>\" wrote in\n> > > after WAL buffer is filled up to the requested position. So when it\n> > > crosses segment boundary we know the all past corss segment-boundary\n> > > records are stable. That means all we need to remember is only the\n> > > position of the latest corss-boundary record.\n> > \n> > I could not agree. In the following case, it may not work well.\n> > - record-A and record-B (record-B is a newer one) is copied, and\n> > - lastSegContRecStart/End points to record-B's, and\n> > - FlushPtr is proceeded to in the middle of record-A.\n>\n> IIUC, that means record-B is a cross segment-border record and we hav e\n> flushed beyond the recrod-B. In that case crash recovery afterwards\n> can read the complete record-B and will finish recovery *after* the\n> record-B. That's what we need here.\n\nI'm sorry I didn't explain enough.\n\nRecord-A and Record-B are cross segment-border records.\nRecord-A spans segment X and X+1\nRecord-B spans segment X+2 and X+3.\nIf both records have been inserted to WAL buffer, lastSegContRecStart/End points to Record-B.\nIf a writer flushes upto the middle of segment-X+1, NotifyStableSegments() allows the writer to notify segment-X.\nIs my understanding correct?\n\nRegards\nRyo Matsumrua\n\n\n",
"msg_date": "Tue, 7 Jul 2020 09:02:56 +0000",
"msg_from": "\"matsumura.ryo@fujitsu.com\" <matsumura.ryo@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "Hello.\n\n# Sorry, I wrongly thought that I replied to this thread..\n\nAt Tue, 7 Jul 2020 09:02:56 +0000, \"matsumura.ryo@fujitsu.com\" <matsumura.ryo@fujitsu.com> wrote in \n> At Monday, July 6, 2020 05:13:40 +0000, \"Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>\" wrote in\n> > > > after WAL buffer is filled up to the requested position. So when it\n> > > > crosses segment boundary we know the all past corss segment-boundary\n> > > > records are stable. That means all we need to remember is only the\n> > > > position of the latest corss-boundary record.\n> > > \n> > > I could not agree. In the following case, it may not work well.\n> > > - record-A and record-B (record-B is a newer one) is copied, and\n> > > - lastSegContRecStart/End points to record-B's, and\n> > > - FlushPtr is proceeded to in the middle of record-A.\n> >\n> > IIUC, that means record-B is a cross segment-border record and we hav e\n> > flushed beyond the recrod-B. In that case crash recovery afterwards\n> > can read the complete record-B and will finish recovery *after* the\n> > record-B. That's what we need here.\n> \n> I'm sorry I didn't explain enough.\n> \n> Record-A and Record-B are cross segment-border records.\n> Record-A spans segment X and X+1\n> Record-B spans segment X+2 and X+3.\n\nOk.\n\n\n> If both records have been inserted to WAL buffer, lastSegContRecStart/End points to Record-B.\n> If a writer flushes upto the middle of segment-X+1, NotifyStableSegments() allows the writer to notify segment-X.\n> Is my understanding correct?\n\nI think that that cannot happen since the segment X must have been\nflushed at the time Record-A is completely flushed out. When we write\nto the next segment, we have already flushed and closed the whole last\nsegment. 
If it is not the case we are to archive segment files not\nfully flushed, and would get broken archive files.\n\nAm I missing something here?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 13 Jul 2020 10:57:36 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "Hello,\n\n> At Mon, 13 Jul 2020 01:57:36 +0000, \"Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>\" wrote in\n> Am I missing something here?\n\nI write more detail(*).\n\n Record-A and Record-B are cross segment-border records.\n Record-A spans segment X and X+1.\n Record-B spans segment X+2 and X+3.\n If both records have been inserted to WAL buffer, lastSegContRecStart/End points to Record-B\n* If a writer flushes segment X and a part of X+1 but record-A is not flushed completely,\n NotifyStableSegments() allows the writer to notify segment-X.\n\nThen, Record-A may be invalidated by crash-recovery and overwritten by new WAL record.\nThe segment-X is not same as the archived one.\n\nRegard\nRyo Matsumura\n\n\n",
"msg_date": "Wed, 22 Jul 2020 02:53:49 +0000",
"msg_from": "\"matsumura.ryo@fujitsu.com\" <matsumura.ryo@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On Wed, Jul 22, 2020 at 02:53:49AM +0000, matsumura.ryo@fujitsu.com wrote:\n> Then, Record-A may be invalidated by crash-recovery and overwritten by new WAL record.\n> The segment-X is not same as the archived one.\n\nPlease note that the latest patch fails to apply per the CF bot, so a\nrebase would be in order to have at least some automated tests for the\nlast patch.\n--\nMichael",
"msg_date": "Mon, 7 Sep 2020 12:07:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 07/07/2020 12:02, matsumura.ryo@fujitsu.com wrote:\n> At Monday, July 6, 2020 05:13:40 +0000, \"Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>\" wrote in\n>>>> after WAL buffer is filled up to the requested position. So when it\n>>>> crosses segment boundary we know the all past corss segment-boundary\n>>>> records are stable. That means all we need to remember is only the\n>>>> position of the latest corss-boundary record.\n>>>\n>>> I could not agree. In the following case, it may not work well.\n>>> - record-A and record-B (record-B is a newer one) is copied, and\n>>> - lastSegContRecStart/End points to record-B's, and\n>>> - FlushPtr is proceeded to in the middle of record-A.\n>>\n>> IIUC, that means record-B is a cross segment-border record and we hav e\n>> flushed beyond the recrod-B. In that case crash recovery afterwards\n>> can read the complete record-B and will finish recovery *after* the\n>> record-B. That's what we need here.\n> \n> I'm sorry I didn't explain enough.\n> \n> Record-A and Record-B are cross segment-border records.\n> Record-A spans segment X and X+1\n> Record-B spans segment X+2 and X+3.\n> If both records have been inserted to WAL buffer, lastSegContRecStart/End points to Record-B.\n> If a writer flushes upto the middle of segment-X+1, NotifyStableSegments() allows the writer to notify segment-X.\n> Is my understanding correct?\n\nI think this little ASCII drawing illustrates the above scenario:\n\n AAAAA F BBBBB\n|---------|---------|---------|\n seg X seg X+1 seg X+2\n\nAAAAA and BBBBB are Record-A and Record-B. F is the current flush pointer.\n\nIn this case, it would be OK to notify segment X, as long as F is \ngreater than the end of record A. And if I'm reading Kyotaro's patch \ncorrectly, that's what would happen with the patch.\n\nThe patch seems correct to me. 
I'm a bit sad that we have to track yet \nanother WAL position (two, actually) to fix this, but I don't see a \nbetter way.\n\nI wonder if we should arrange things so that XLogwrtResult.Flush never \npoints in the middle of a record? I'm not totally convinced that all the \ncurrent callers of GetFlushRecPtr() are OK with a middle-of-WAL record \nvalue. Could we get into similar trouble if a standby replicates half of \na cross-segment record to a cascaded standby, and the cascaded standby \nhas WAL archiving enabled?\n\n- Heikki\n\n\n",
"msg_date": "Mon, 12 Oct 2020 15:04:40 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "Thanks for visiting this thread.\n\nAt Mon, 12 Oct 2020 15:04:40 +0300, Heikki Linnakangas <hlinnaka@iki.fi> wrote in \n> On 07/07/2020 12:02, matsumura.ryo@fujitsu.com wrote:\n> > At Monday, July 6, 2020 05:13:40 +0000, \"Kyotaro Horiguchi\n> > <horikyota(dot)ntt(at)gmail(dot)com>\" wrote in\n> >>>> after WAL buffer is filled up to the requested position. So when it\n> >>>> crosses segment boundary we know the all past corss segment-boundary\n> >>>> records are stable. That means all we need to remember is only the\n> >>>> position of the latest corss-boundary record.\n> >>>\n> >>> I could not agree. In the following case, it may not work well.\n> >>> - record-A and record-B (record-B is a newer one) is copied, and\n> >>> - lastSegContRecStart/End points to record-B's, and\n> >>> - FlushPtr is proceeded to in the middle of record-A.\n> >>\n> >> IIUC, that means record-B is a cross segment-border record and we hav\n> >> e\n> >> flushed beyond the recrod-B. In that case crash recovery afterwards\n> >> can read the complete record-B and will finish recovery *after* the\n> >> record-B. That's what we need here.\n> > I'm sorry I didn't explain enough.\n> > Record-A and Record-B are cross segment-border records.\n> > Record-A spans segment X and X+1\n> > Record-B spans segment X+2 and X+3.\n> > If both records have been inserted to WAL buffer,\n> > lastSegContRecStart/End points to Record-B.\n> > If a writer flushes upto the middle of segment-X+1,\n> > NotifyStableSegments() allows the writer to notify segment-X.\n> > Is my understanding correct?\n> \n> I think this little ASCII drawing illustrates the above scenario:\n> \n> AAAAA F BBBBB\n> |---------|---------|---------|\n> seg X seg X+1 seg X+2\n> \n> AAAAA and BBBBB are Record-A and Record-B. 
F is the current flush\n> pointer.\n\nI modified the figure a bit for the explanation below.\n\n F0 F1\n AAAAA F BBBBB\n|---------|---------|---------|\n seg X seg X+1 seg X+2\n\nMatsumura-san has a concern about the case where there are two (or\nmore) partially-flushed segment-spanning records at the same time.\n\nThis patch remembers only the last cross-segment record. If we were\ngoing to flush up to F0 after Record-B had been written, we would fail\nto hold-off archiving seg-X. This patch is based on a assumption that\nthat case cannot happen because we don't leave a pending page at the\ntime of segment switch and no records don't span over three or more\nsegments.\n\n> In this case, it would be OK to notify segment X, as long as F is\n> greater than the end of record A. And if I'm reading Kyotaro's patch\n> correctly, that's what would happen with the patch.\n> \n> The patch seems correct to me. I'm a bit sad that we have to track yet\n> another WAL position (two, actually) to fix this, but I don't see a\n> better way.\n\nIs the two means Record-A and B? Is it needed even with having the\nassumption above?\n\n> I wonder if we should arrange things so that XLogwrtResult.Flush never\n> points in the middle of a record? I'm not totally convinced that all\n\nThat happens at good percentage of page-boundary. And a record can\nspan over three or more pages. Do we need to avoid all such cases?\n\nI did that only for the cross-segment case.\n\n> the current callers of GetFlushRecPtr() are OK with a middle-of-WAL\n> record value. Could we get into similar trouble if a standby\n> replicates half of a cross-segment record to a cascaded standby, and\n> the cascaded standby has WAL archiving enabled?\n\nThe patch includes a fix for primary->standby case. But I'm not sure\nwe can do that in the cascaded case. A standby is not aware of the\nstructure of a WAL blob and has no idea of up-to-where to send the\nreceived blobs. 
However, if we can rely on the behavior of CopyData\nthat we always receive a blob as a whole sent from the sender at once,\nthe cascaded standbys are free from the trouble (as far as the\ncascaded-standby doesn't crash just before writing the last-half of a\nrecord into pg_wal and after archiving the last full-segment, which\nseems unlikely.).\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 14 Oct 2020 09:06:28 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 14.10.2020 03:06, Kyotaro Horiguchi wrote:\n> The patch includes a fix for primary->standby case. But I'm not sure\n> we can do that in the cascaded case. A standby is not aware of the\n> structure of a WAL blob and has no idea of up-to-where to send the\n> received blobs. However, if we can rely on the behavior of CopyData\n> that we always receive a blob as a whole sent from the sender at once,\n> the cascaded standbys are free from the trouble (as far as the\n> cascaded-standby doesn't crash just before writing the last-half of a\n> record into pg_wal and after archiving the last full-segment, which\n> seems unlikely.).\n>\n> regards.\n>\n\nStatus update for a commitfest entry.\n\nThis entry was \"Waiting on author\" during this CF. As I see, the latest \nmessage contains new version of the patch.\nDoes it need more work? Are you going to continue working on it?\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Sun, 29 Nov 2020 22:16:58 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "Apologies for the long delay.\r\n\r\nI've spent a good amount of time thinking about this bug and trying\r\nout a few different approaches for fixing it. I've attached a work-\r\nin-progress patch for my latest attempt.\r\n\r\nOn 10/13/20, 5:07 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\r\n> F0 F1\r\n> AAAAA F BBBBB\r\n> |---------|---------|---------|\r\n> seg X seg X+1 seg X+2\r\n>\r\n> Matsumura-san has a concern about the case where there are two (or\r\n> more) partially-flushed segment-spanning records at the same time.\r\n>\r\n> This patch remembers only the last cross-segment record. If we were\r\n> going to flush up to F0 after Record-B had been written, we would fail\r\n> to hold-off archiving seg-X. This patch is based on a assumption that\r\n> that case cannot happen because we don't leave a pending page at the\r\n> time of segment switch and no records don't span over three or more\r\n> segments.\r\n\r\nI wonder if these are safe assumptions to make. For your example, if\r\nwe've written record B to the WAL buffers, but neither record A nor B\r\nhave been written to disk or flushed, aren't we still in trouble?\r\nAlso, is there actually any limit on WAL record length that means that\r\nit is impossible for a record to span over three or more segments?\r\nPerhaps these assumptions are true, but it doesn't seem obvious to me\r\nthat they are, and they might be pretty fragile.\r\n\r\nThe attached patch doesn't make use of these assumptions. Instead, we\r\ntrack the positions of the records that cross segment boundaries in a\r\nsmall hash map, and we use that to determine when it is safe to mark a\r\nsegment as ready for archival. I think this approach resembles\r\nMatsumura-san's patch from June.\r\n\r\nAs before, I'm not handling replication, archive_timeout, and\r\npersisting latest-marked-ready through crashes yet. 
For persisting\r\nthe latest-marked-ready segment through crashes, I was thinking of\r\nusing a new file that stores the segment number.\r\n\r\nNathan",
"msg_date": "Mon, 14 Dec 2020 18:25:23 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "At Mon, 14 Dec 2020 18:25:23 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> Apologies for the long delay.\n> \n> I've spent a good amount of time thinking about this bug and trying\n> out a few different approaches for fixing it. I've attached a work-\n> in-progress patch for my latest attempt.\n> \n> On 10/13/20, 5:07 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\n> > F0 F1\n> > AAAAA F BBBBB\n> > |---------|---------|---------|\n> > seg X seg X+1 seg X+2\n> >\n> > Matsumura-san has a concern about the case where there are two (or\n> > more) partially-flushed segment-spanning records at the same time.\n> >\n> > This patch remembers only the last cross-segment record. If we were\n> > going to flush up to F0 after Record-B had been written, we would fail\n> > to hold-off archiving seg-X. This patch is based on a assumption that\n> > that case cannot happen because we don't leave a pending page at the\n> > time of segment switch and no records don't span over three or more\n> > segments.\n> \n> I wonder if these are safe assumptions to make. For your example, if\n> we've written record B to the WAL buffers, but neither record A nor B\n> have been written to disk or flushed, aren't we still in trouble?\n\nYou're right in that regard. There's a window where partial record is\nwritten when write location passes F0 after insertion location passes\nF1. However, remembering all spanning records seems overkilling to me.\n\nI modifed the previous patch so that it remembers the start LSN of the\n*oldest* corss-segment continuation record in the last consecutive\nbonded segments, and the end LSN of the latest cross-segmetn\ncontinuation record. This doesn't foreget past segment boundaries.\nThe region is cleard when WAL-write LSN goes beyond the remembered end\nLSN. 
So the region may contain several wal-segments that are not\nconnected to the next one, but that doesn't matter so much.\n\n\n> Also, is there actually any limit on WAL record length that means that\n> it is impossible for a record to span over three or more segments?\n\nEven though it is not a hard limit, AFAICS as mentioned before the\nlongest possible record is what log_newpages() emits. that is up to\nabout 500kBytes for now. I think we don't want to make the length\nlonger. If we set the wal_segment_size to 1MB and set the block size\nto 16kB or more, we would have a recrod spanning over three or more\nsegments but I don't think that is a sane configuration and that kind\nof issue could happen elsewhere.\n\n> Perhaps these assumptions are true, but it doesn't seem obvious to me\n> that they are, and they might be pretty fragile.\n\nI added an assertion that a record must be shorter than a wal segment\nto XLogRecordAssemble(). This guarantees the assumption to be true?\n(The condition is tentative, would need to be adjusted.)\n\n> The attached patch doesn't make use of these assumptions. Instead, we\n> track the positions of the records that cross segment boundaries in a\n> small hash map, and we use that to determine when it is safe to mark a\n> segment as ready for archival. I think this approach resembles\n> Matsumura-san's patch from June.\n> \n> As before, I'm not handling replication, archive_timeout, and\n> persisting latest-marked-ready through crashes yet. For persisting\n> the latest-marked-ready segment through crashes, I was thinking of\n> using a new file that stores the segment number.\n\n\nAlso, the attached is a PoC.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 15 Dec 2020 19:32:57 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "At Tue, 15 Dec 2020 19:32:57 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Mon, 14 Dec 2020 18:25:23 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> > I wonder if these are safe assumptions to make. For your example, if\n> > we've written record B to the WAL buffers, but neither record A nor B\n> > have been written to disk or flushed, aren't we still in trouble?\n> \n> You're right in that regard. There's a window where partial record is\n> written when write location passes F0 after insertion location passes\n> F1. However, remembering all spanning records seems overkilling to me.\n> \n> I modifed the previous patch so that it remembers the start LSN of the\n> *oldest* corss-segment continuation record in the last consecutive\n> bonded segments, and the end LSN of the latest cross-segmetn\n> continuation record. This doesn't foreget past segment boundaries.\n> The region is cleard when WAL-write LSN goes beyond the remembered end\n> LSN. So the region may contain several wal-segments that are not\n> connected to the next one, but that doesn't matter so much.\n\nMmm. Even tough it'a PoC, it was too bogus. I fixed it to work saner\nway.\n\n- Record the beginning LSN of the first cross-seg record and the end\n LSN of the last cross-seg recrod in a consecutive segments bonded by\n cross-seg recrods. Spcifically X and Y below.\n\n X Z Y \n [recA] [recB] [recC]\n [seg A] [seg B] [seg C] [seg D] [seg E]\n(1) (2.2) (2.2) (2.1) (2.1) (1)\n\n1. If we wrote upto before X or beyond Y at a segment boundary, notify\n the finished segment immediately.\n\n 1.1. If we have written beyond Y, clear the recorded region.\n\n2. Otherwise we don't notify the segment immediately:\n\n 2.1. If write request was up to exactly the current segment boundary\n and we know the end LSN of the record there (that is, it is recC\n above), extend the request to the end LSN. 
Then notify the segment\n after the record is written to the end.\n\n 2.2. Otherwise (that is recA or recB), we don't know whether the\n last record of the last segment is ends just at the segment boundary\n (Z) or a record spans between segments (recB). Anyway even if there\n is such a record there, we don't know where it ends. As the result\n what we can do there is only to refrain from notifying. It doesn't\n matter so much since we have already inserted recC so we will soon\n reach recC and will notify up to seg C.\n\nThere might be a case where we insert up to Y before writing up to Z,\nthe segment-region X-Y contains non-connected segment boundary in that\ncase. It is processed as if it is a connected segment\nboundary. However, like 2.2 above, It doesn't matter since we write up\nto Y soon.\n\nAt Tue, 15 Dec 2020 19:32:57 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \nme> I added an assertion that a record must be shorter than a wal segment\nme> to XLogRecordAssemble(). This guarantees the assumption to be true?\nme> (The condition is tentative, would need to be adjusted.)\n\nChanged the assertion more direct way.\n\nme> Also, the attached is a PoC.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 16 Dec 2020 11:01:20 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 12/15/20, 2:33 AM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\r\n> You're right in that regard. There's a window where partial record is\r\n> written when write location passes F0 after insertion location passes\r\n> F1. However, remembering all spanning records seems overkilling to me.\r\n\r\nI'm curious why you feel that recording all cross-segment records is\r\noverkill. IMO it seems far simpler to just do that rather than try to\r\nreason about all these different scenarios and rely on various\r\n(and possibly fragile) assumptions. You only need to record the end\r\nlocation of records that cross into the next segment (or that fit\r\nperfectly into the end of the current one) and to evaluate which\r\nsegments to mark .ready as the \"flushed\" LSN advances. I'd expect\r\nthat in most cases we wouldn't need to store more than a couple of\r\nrecord boundaries, so it's not like we'd normally be storing dozens of\r\nboundaries. Even if we did need to store several boundaries, AFAICT\r\nthe approach I'm proposing should still work well enough.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Thu, 17 Dec 2020 22:20:35 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "At Thu, 17 Dec 2020 22:20:35 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> On 12/15/20, 2:33 AM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\n> > You're right in that regard. There's a window where partial record is\n> > written when write location passes F0 after insertion location passes\n> > F1. However, remembering all spanning records seems overkilling to me.\n> \n> I'm curious why you feel that recording all cross-segment records is\n> overkill. IMO it seems far simpler to just do that rather than try to\n\nSorry, my words are not enough. Remembering all spanning records in\n*shared memory* seems to be overkilling. Much more if it is stored in\nshared hash table. Even though it rarely the case, it can fail hard\nway when reaching the limit. If we could do well by remembering just\ntwo locations, we wouldn't need to worry about such a limitation.\n\n> reason about all these different scenarios and rely on various\n> (and possibly fragile) assumptions. You only need to record the end\n\nAfter the previous mail sent, I noticed that the assumption on\nrecord-length was not needed. So that way no longer need any of the\nassumption^^;\n\n> location of records that cross into the next segment (or that fit\n> perfectly into the end of the current one) and to evaluate which\n> segments to mark .ready as the \"flushed\" LSN advances. I'd expect\n> that in most cases we wouldn't need to store more than a couple of\n> record boundaries, so it's not like we'd normally be storing dozens of\n> boundaries. Even if we did need to store several boundaries, AFAICT\n> the approach I'm proposing should still work well enough.\n\nI didn't say it doesn't work, just overkill.\n\nAnother concern about the concrete patch:\n\nNotifySegmentsReadyForArchive() searches the shared hashacquiaing a\nLWLock every time XLogWrite is called while segment archive is being\nheld off. 
I don't think it is acceptable and I think it could be a\nproblem when many backends are competing on WAL.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 18 Dec 2020 14:14:49 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "At Wed, 16 Dec 2020 11:01:20 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> - Record the beginning LSN of the first cross-seg record and the end\n> LSN of the last cross-seg recrod in a consecutive segments bonded by\n> cross-seg recrods. Spcifically X and Y below.\n> \n> X Z Y \n> [recA] [recB] [recC]\n> [seg A] [seg B] [seg C] [seg D] [seg E]\n> (1) (2.2) (2.2) (2.1) (2.1) (1)\n> \n> 1. If we wrote upto before X or beyond Y at a segment boundary, notify\n> the finished segment immediately.\n> \n> 1.1. If we have written beyond Y, clear the recorded region.\n> \n> 2. Otherwise we don't notify the segment immediately:\n> \n> 2.1. If write request was up to exactly the current segment boundary\n> and we know the end LSN of the record there (that is, it is recC\n> above), extend the request to the end LSN. Then notify the segment\n> after the record is written to the end.\n> \n> 2.2. Otherwise (that is recA or recB), we don't know whether the\n> last record of the last segment is ends just at the segment boundary\n> (Z) or a record spans between segments (recB). Anyway even if there\n> is such a record there, we don't know where it ends. As the result\n> what we can do there is only to refrain from notifying. It doesn't\n> matter so much since we have already inserted recC so we will soon\n> reach recC and will notify up to seg C.\n> \n> There might be a case where we insert up to Y before writing up to Z,\n> the segment-region X-Y contains non-connected segment boundary in that\n> case. It is processed as if it is a connected segment\n> boundary. However, like 2.2 above, It doesn't matter since we write up\n> to Y soon.\n\nI noticed that we can cause the continuation record flushed\nimmedately. So in the attached,\n\n1. If there's no remembered cross-segment boundary or we're out of the\n region X-Y, notify the finished segment immediately.\n\n2. Otherwise we don't notify the segment immedately\n\n 2.1. 
If we are finishing the last segment known to continue to the\n next segment, extend the write request to the end of the record *and*\n force a write, then flush up to there.\n\n 2.2. (the same as the above)\n\n3. In the case of 2.1, we can flush the previous segment immediately,\n so do that.\n\nX. When we notify a segment, clear the remembered region if we have got\n out of the region.\n\n\nThe attached is changed in the following points:\n\n- Fixed some bugs where I confusedly referred to write-lsn instead of flush-lsn.\n\n- Changed to urge flushing up to the end of a continuation record, not\n only waiting for the record to be written.\n\n- More aggressively clear the remembered region.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 18 Dec 2020 14:42:05 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "Hi!\n\nI was looking to review something in CF. This seems like a thread of some interest to me.\n\nRecently we had somewhat related incident. Do I understand correctly that this incident is related to the bug discussed in this thread?\n\nPrimary instance was killed by OOM\n[ 2020-11-12 15:27:03.732 MSK ,,,739,00000 ]:LOG: server process (PID 40189) was terminated by signal 9: Killed\nafter recovery it archived some WAL segments.\n[ 2020-11-12 15:27:31.477 MSK ,,,739,00000 ]:LOG: database system is ready to accept connections\nINFO: 2020/11/12 15:27:32.059541 FILE PATH: 0000000E0001C02F000000AF.br\nINFO: 2020/11/12 15:27:32.114319 FILE PATH: 0000000E0001C02F000000B3.br\n\nthen PITR failed on another host\n[ 2020-11-12 16:26:33.024 MSK ,,,51414,00000 ]:LOG: restored log file \"0000000E0001C02F000000B3\" from archive\n[ 2020-11-12 16:26:33.042 MSK ,,,51414,00000 ]:LOG: invalid record length at 1C02F/B3FFF778: wanted 24, got 0\n[ 2020-11-12 16:26:33.042 MSK ,,,51414,00000 ]:LOG: invalid record length at 1C02F/B3FFF778: wanted 24, got 0\n\narchived segment has some zeroes at the end\nrmgr: XLOG len (rec/tot): 51/ 1634, tx: 0, lsn: 1C02F/B3FFF058, prev 1C02F/B3FFEFE8, desc: FPI_FOR_HINT , blkref #0: rel 1663/14030/16384 blk 140 FPW\nrmgr: Heap len (rec/tot): 129/ 129, tx: 3890578935, lsn: 1C02F/B3FFF6C0, prev 1C02F/B3FFF058, desc: HOT_UPDATE off 34 xmax 3890578935 ; new off 35 xmax 0, blkref #0: rel 1663/14030/16384 blk 140\nrmgr: Transaction len (rec/tot): 46/ 46, tx: 3890578935, lsn: 1C02F/B3FFF748, prev 1C02F/B3FFF6C0, desc: COMMIT 2020-11-12 15:27:31.507363 MSK\npg_waldump: FATAL: error in WAL record at 1C02F/**B3FFF748**: invalid record length at 1C02F/**B3FFF778**: wanted 24, got 0\n\nMeanwhile next segment points to previous record at **B3FFF748**\npostgres@man-odszl7u4361o8m3z:/tmp$ pg_waldump 0000000E0001C02F000000B4| head\nrmgr: Heap len (rec/tot): 129/ 129, tx: 3890578936, lsn: 1C02F/B4000A68, prev 1C02F/**B3FFF778**, desc: HOT_UPDATE off 35 
xmax 3890578936 ; new off 36 xmax 0, blkref #0: rel 1663/14030/16384 blk 140\nrmgr: Transaction len (rec/tot): 46/ 46, tx: 3890578936, lsn: 1C02F/B4000AF0, prev 1C02F/B4000A68, desc: COMMIT 2020-11-12 15:27:32.509443 MSK\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sat, 2 Jan 2021 21:54:39 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "Hi!\n\nThanks for working on this.\n\n> On 18 Dec 2020, at 10:42, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> \n> I noticed that we can cause the continuation record to be flushed\n> immediately. \n\nI've taken a look at the code and want to share some thoughts.\n\n1. Maybe we could avoid interlacing fields protected by different locks in XLogCtlData? We could place lastNotifiedSeg somewhere near the fields that are protected by WALWriteLock. I'm not sure it's a useful idea.\n2. In XLogInsertRecord() we release &XLogCtl->info_lck just to compute a few bytes, and then possibly acquire it back. Maybe just hold the lock a little longer?\n3. Could the things that are done by GetLastNotifiedSegment() just be an atomic read? I'm not sure it's common practice.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Wed, 6 Jan 2021 11:53:52 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 12/17/20, 9:15 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\r\n> At Thu, 17 Dec 2020 22:20:35 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in\r\n>> On 12/15/20, 2:33 AM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\r\n>> > You're right in that regard. There's a window where partial record is\r\n>> > written when write location passes F0 after insertion location passes\r\n>> > F1. However, remembering all spanning records seems overkilling to me.\r\n>>\r\n>> I'm curious why you feel that recording all cross-segment records is\r\n>> overkill. IMO it seems far simpler to just do that rather than try to\r\n>\r\n> Sorry, my words are not enough. Remembering all spanning records in\r\n> *shared memory* seems to be overkilling. Much more if it is stored in\r\n> shared hash table. Even though it rarely the case, it can fail hard\r\n> way when reaching the limit. If we could do well by remembering just\r\n> two locations, we wouldn't need to worry about such a limitation.\r\n\r\nI don't think it will fail if we reach max_size for the hash table.\r\nThe comment above ShmemInitHash() has this note:\r\n\r\n * max_size is the estimated maximum number of hashtable entries. This is\r\n * not a hard limit, but the access efficiency will degrade if it is\r\n * exceeded substantially (since it's used to compute directory size and\r\n * the hash table buckets will get overfull).\r\n\r\n> Another concern about the concrete patch:\r\n>\r\n> NotifySegmentsReadyForArchive() searches the shared hashacquiaing a\r\n> LWLock every time XLogWrite is called while segment archive is being\r\n> held off. I don't think it is acceptable and I think it could be a\r\n> problem when many backends are competing on WAL.\r\n\r\nThis is a fair point. I did some benchmarking with a few hundred\r\nconnections all doing writes, and I was not able to discern any\r\nnoticeable performance impact. 
My guess is that contention on this\r\nnew lock is unlikely because callers of XLogWrite() must already hold\r\nWALWriteLock. Otherwise, I believe we only acquire ArchNotifyLock no\r\nmore than once per segment to record new record boundaries.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Tue, 26 Jan 2021 19:13:57 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 1/2/21, 8:55 AM, \"Andrey Borodin\" <x4mmm@yandex-team.ru> wrote:\r\n> Recently we had somewhat related incident. Do I understand correctly that this incident is related to the bug discussed in this thread?\r\n\r\nI'm not sure that we can rule it out, but the log pattern I've\r\ntypically seen for this is \"invalid contrecord length.\" The issue is\r\nthat we're marking segments as ready for archive when the segment is\r\nfully written versus when its WAL records are fully written (since its\r\nWAL records may cross into the next segment). The fact that you're\r\nseeing zeroes at the end of your archived segments leads me to think\r\nit is unlikely that you are experiencing this bug.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Tue, 26 Jan 2021 19:31:04 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "At Tue, 26 Jan 2021 19:13:57 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> On 12/17/20, 9:15 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\n> > At Thu, 17 Dec 2020 22:20:35 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in\n> >> On 12/15/20, 2:33 AM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\n> >> > You're right in that regard. There's a window where partial record is\n> >> > written when write location passes F0 after insertion location passes\n> >> > F1. However, remembering all spanning records seems overkilling to me.\n> >>\n> >> I'm curious why you feel that recording all cross-segment records is\n> >> overkill. IMO it seems far simpler to just do that rather than try to\n> >\n> > Sorry, my words are not enough. Remembering all spanning records in\n> > *shared memory* seems to be overkilling. Much more if it is stored in\n> > shared hash table. Even though it rarely the case, it can fail hard\n> > way when reaching the limit. If we could do well by remembering just\n> > two locations, we wouldn't need to worry about such a limitation.\n> \n> I don't think it will fail if we reach max_size for the hash table.\n> The comment above ShmemInitHash() has this note:\n> \n> * max_size is the estimated maximum number of hashtable entries. This is\n> * not a hard limit, but the access efficiency will degrade if it is\n> * exceeded substantially (since it's used to compute directory size and\n> * the hash table buckets will get overfull).\n\nThat description means that a shared hash has a directory with fixed\nsize thus there may be synonyms, which causes degradation. Even though\nbuckets are preallocated with the specified number, since the minimum\ndirectory size is 256, buckets are allocated at least 256 in a long\nrun. Minimum on-the-fly allocation size is 32. 
I haven't calculated\nfurther precisely, but I'm worried about the amount of spare shared\nmemory the hash can allocate.\n\n> > Another concern about the concrete patch:\n> >\n> > NotifySegmentsReadyForArchive() searches the shared hashacquiaing a\n> > LWLock every time XLogWrite is called while segment archive is being\n> > held off. I don't think it is acceptable and I think it could be a\n> > problem when many backends are competing on WAL.\n> \n> This is a fair point. I did some benchmarking with a few hundred\n> connections all doing writes, and I was not able to discern any\n> noticeable performance impact. My guess is that contention on this\n> new lock is unlikely because callers of XLogWrite() must already hold\n> WALWriteLock. Otherwise, I believe we only acquire ArchNotifyLock no\n> more than once per segment to record new record boundaries.\n\nThanks. I agree that the reader-reader contention is not a problem due\nto existing serialization by WALWriteLock. Adding an entry happens\nonly at a segment boundary, so the ArchNotifyLock doesn't seem to be a\nproblem.\n\nHowever, the function prolongs the WALWriteLock section. Couldn't we\nsomehow move the call to NotifySegmentsReadyForArchive in XLogWrite\nout of the WALWriteLock section?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 27 Jan 2021 11:35:00 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 1/26/21, 6:36 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\r\n> At Tue, 26 Jan 2021 19:13:57 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in\r\n>> On 12/17/20, 9:15 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\r\n>> > At Thu, 17 Dec 2020 22:20:35 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in\r\n>> >> On 12/15/20, 2:33 AM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\r\n>> >> > You're right in that regard. There's a window where partial record is\r\n>> >> > written when write location passes F0 after insertion location passes\r\n>> >> > F1. However, remembering all spanning records seems overkilling to me.\r\n>> >>\r\n>> >> I'm curious why you feel that recording all cross-segment records is\r\n>> >> overkill. IMO it seems far simpler to just do that rather than try to\r\n>> >\r\n>> > Sorry, my words are not enough. Remembering all spanning records in\r\n>> > *shared memory* seems to be overkilling. Much more if it is stored in\r\n>> > shared hash table. Even though it rarely the case, it can fail hard\r\n>> > way when reaching the limit. If we could do well by remembering just\r\n>> > two locations, we wouldn't need to worry about such a limitation.\r\n>>\r\n>> I don't think it will fail if we reach max_size for the hash table.\r\n>> The comment above ShmemInitHash() has this note:\r\n>>\r\n>> * max_size is the estimated maximum number of hashtable entries. This is\r\n>> * not a hard limit, but the access efficiency will degrade if it is\r\n>> * exceeded substantially (since it's used to compute directory size and\r\n>> * the hash table buckets will get overfull).\r\n>\r\n> That description means that a shared hash has a directory with fixed\r\n> size thus there may be synonyms, which causes degradation. Even though\r\n> buckets are preallocated with the specified number, since the minimum\r\n> directory size is 256, buckets are allocated at least 256 in a long\r\n> run. 
Minimum on-the-fly allocation size is 32. I haven't calculated\r\n> further precisely, but I'm worried about the amount of spare shared\r\n> memory the hash can allocate.\r\n\r\nOn my machine, hash_estimate_size() for the table returns 5,968 bytes.\r\nThat estimate is for a max_size of 16. In my testing, I've been able\r\nto need up to 6 elements in this table, but that required turning off\r\nsynchronous_commit, adding a long sleep at the end of XLogWrite(), and\r\nincreasing wal_buffers substantially. This leads me to think that a\r\nmax_size of 16 elements is typically sufficient. (I may have also\r\naccidentally demonstrated that only storing two record boundaries\r\ncould be insufficient.)\r\n\r\n>> > Another concern about the concrete patch:\r\n>> >\r\n>> > NotifySegmentsReadyForArchive() searches the shared hashacquiaing a\r\n>> > LWLock every time XLogWrite is called while segment archive is being\r\n>> > held off. I don't think it is acceptable and I think it could be a\r\n>> > problem when many backends are competing on WAL.\r\n>>\r\n>> This is a fair point. I did some benchmarking with a few hundred\r\n>> connections all doing writes, and I was not able to discern any\r\n>> noticeable performance impact. My guess is that contention on this\r\n>> new lock is unlikely because callers of XLogWrite() must already hold\r\n>> WALWriteLock. Otherwise, I believe we only acquire ArchNotifyLock no\r\n>> more than once per segment to record new record boundaries.\r\n>\r\n> Thanks. I agree that the reader-reader contention is not a problem due\r\n> to existing serialization by WALWriteLock. Adding an entry happens\r\n> only at a segment boundary, so the ArchNotifyLock doesn't seem to be a\r\n> problem.\r\n>\r\n> However, the function prolongs the WALWriteLock section. Couldn't we\r\n> somehow move the call to NotifySegmentsReadyForArchive in XLogWrite\r\n> out of the WALWriteLock section?\r\n\r\nI don't see a clean way to do that. 
XLogWrite() assumes that\r\nWALWriteLock is held when it is called, and it doesn't release it at\r\nany point. I think we'd have to move NotifySegmentsReadyForArchive()\r\nto the callers of XLogWrite() if we wanted to avoid holding onto\r\nWALWriteLock for longer. Unless we can point to a measurable\r\nperformance penalty, I'm not sure this is worth it.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Wed, 27 Jan 2021 20:16:50 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "Here is a rebased version of my patch. As before, I'm not yet\r\nhandling replication, archive_timeout, and persisting latest-marked-\r\nready through crashes. If this approach seems reasonable to others,\r\nI'll go ahead and start working on these items.\r\n\r\nNathan",
"msg_date": "Wed, 27 Jan 2021 21:29:08 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "Alright, I've attached a new patch set for this.\r\n\r\n0001 is similar to the last patch I sent in this thread, although it\r\ncontains a few fixes. The main difference is that we no longer\r\ninitialize lastNotifiedSeg in StartupXLOG(). Instead, we initialize\r\nit in XLogWrite() where we previously were creating the archive status\r\nfiles. This ensures that standby servers do not create many\r\nunnecessary archive status files after promotion.\r\n\r\n0002 adds logic for persisting the last notified segment through\r\ncrashes. This is needed because a poorly-timed crash could otherwise\r\ncause us to skip marking segments as ready-for-archival altogether.\r\nThis file is only used for primary servers, as there exists a separate\r\ncode path for marking segments as ready-for-archive for standbys.\r\n\r\nI considered attempting to prevent this bug from affecting standby\r\nservers by withholding WAL for a segment until the previous segment\r\nhas been marked ready-for-archival. However, that would require us to\r\ntrack record boundaries even with archiving turned off. Also, my\r\npatch relied on the assumption that the flush pointer advances along\r\nrecord boundaries except for records that span multiple segments.\r\nThis assumption is likely not always true, and even if it is, it seems\r\npretty fragile. Furthermore, I suspect that there are still problems\r\nwith standbys since the code path responsible for creating archive\r\nstatus files on standbys has even less context about the WAL record\r\nboundaries. IMO patches 0001 and 0002 should be the focus for now,\r\nand related bugs for standby servers should be picked up in a new\r\nthread.\r\n\r\nI ended up not touching archive_timeout at all. The documentation for\r\nthis parameter seems to be written ambiguously enough such that any\r\nsmall differences in behavior with these patches is still acceptable.\r\nI don't expect that users will see much change. 
In the worst case,\r\nthe timer for archive_timeout may get reset a bit before the segment's\r\narchive status file is created.\r\n\r\nNathan",
"msg_date": "Fri, 19 Feb 2021 00:08:04 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2/18/21, 4:10 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> Alright, I've attached a new patch set for this.\r\n>\r\n> 0001 is similar to the last patch I sent in this thread, although it\r\n> contains a few fixes. The main difference is that we no longer\r\n> initialize lastNotifiedSeg in StartupXLOG(). Instead, we initialize\r\n> it in XLogWrite() where we previously were creating the archive status\r\n> files. This ensures that standby servers do not create many\r\n> unnecessary archive status files after promotion.\r\n>\r\n> 0002 adds logic for persisting the last notified segment through\r\n> crashes. This is needed because a poorly-timed crash could otherwise\r\n> cause us to skip marking segments as ready-for-archival altogether.\r\n> This file is only used for primary servers, as there exists a separate\r\n> code path for marking segments as ready-for-archive for standbys.\r\n>\r\n> I considered attempting to prevent this bug from affecting standby\r\n> servers by withholding WAL for a segment until the previous segment\r\n> has been marked ready-for-archival. However, that would require us to\r\n> track record boundaries even with archiving turned off. Also, my\r\n> patch relied on the assumption that the flush pointer advances along\r\n> record boundaries except for records that span multiple segments.\r\n> This assumption is likely not always true, and even if it is, it seems\r\n> pretty fragile. Furthermore, I suspect that there are still problems\r\n> with standbys since the code path responsible for creating archive\r\n> status files on standbys has even less context about the WAL record\r\n> boundaries. IMO patches 0001 and 0002 should be the focus for now,\r\n> and related bugs for standby servers should be picked up in a new\r\n> thread.\r\n>\r\n> I ended up not touching archive_timeout at all. 
The documentation for\r\n> this parameter seems to be written ambiguously enough such that any\r\n> small differences in behavior with these patches is still acceptable.\r\n> I don't expect that users will see much change. In the worst case,\r\n> the timer for archive_timeout may get reset a bit before the segment's\r\n> archive status file is created.\r\n\r\nI've attached a set of rebased patches.\r\n\r\nNathan",
"msg_date": "Mon, 15 Mar 2021 16:34:29 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Feb-19, Bossart, Nathan wrote:\n\n> 0002 adds logic for persisting the last notified segment through\n> crashes. This is needed because a poorly-timed crash could otherwise\n> cause us to skip marking segments as ready-for-archival altogether.\n> This file is only used for primary servers, as there exists a separate\n> code path for marking segments as ready-for-archive for standbys.\n\nI'm not sure I understand what's the reason not to store this value in\npg_control; I feel like I'm missing something. Can you please explain?\n\nThere were some comments earlier in the thread about the maximum size of\na record. As I recall, you can have records of arbitrary size if you\nhave COMMIT with a large number of relation invalidation messages being\nincluded in the xlog record, or a large number of XIDs of\nsubtransactions in the transaction. Spanning several segments is\npossible, AFAIU.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"La rebeldía es la virtud original del hombre\" (Arthur Schopenhauer)\n\n\n",
"msg_date": "Tue, 27 Jul 2021 21:04:58 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 7/27/21, 6:05 PM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\r\n> On 2021-Feb-19, Bossart, Nathan wrote:\r\n>\r\n>> 0002 adds logic for persisting the last notified segment through\r\n>> crashes. This is needed because a poorly-timed crash could otherwise\r\n>> cause us to skip marking segments as ready-for-archival altogether.\r\n>> This file is only used for primary servers, as there exists a separate\r\n>> code path for marking segments as ready-for-archive for standbys.\r\n>\r\n> I'm not sure I understand what's the reason not to store this value in\r\n> pg_control; I feel like I'm missing something. Can you please explain?\r\n\r\nThanks for taking a look.\r\n\r\nThe only reason I can think of is that it could make back-patching\r\ndifficult. I don't mind working on a version of the patch that uses\r\npg_control. Back-patching this fix might be a stretch, anyway.\r\n\r\n> There were some comments earlier in the thread about the maximum size of\r\n> a record. As I recall, you can have records of arbitrary size if you\r\n> have COMMIT with a large number of relation invalidation messages being\r\n> included in the xlog record, or a large number of XIDs of\r\n> subtransactions in the transaction. Spanning several segments is\r\n> possible, AFAIU.\r\n\r\nThis is my understanding, too.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Wed, 28 Jul 2021 01:59:57 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Jul-28, Bossart, Nathan wrote:\n\n> On 7/27/21, 6:05 PM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\n\n> > I'm not sure I understand what's the reason not to store this value in\n> > pg_control; I feel like I'm missing something. Can you please explain?\n> \n> Thanks for taking a look.\n> \n> The only reason I can think of is that it could make back-patching\n> difficult. I don't mind working on a version of the patch that uses\n> pg_control. Back-patching this fix might be a stretch, anyway.\n\nHmm ... I'm not sure we're prepared to backpatch this kind of change.\nIt seems a bit too disruptive to how replay works. I think we\nshould be focusing solely on patch 0001 to surgically fix the precise\nbug you see. Does patch 0002 exist because you think that a system with\nonly 0001 will not correctly deal with a crash at the right time?\n\n\nNow, the reason I'm looking at this patch series is that we're seeing a\nrelated problem with walsender/walreceiver, which apparently are capable\nof creating a file in the replica that ends up not existing in the\nprimary after a crash, for a reason closely related to what you\ndescribe for WAL archival. I'm not sure what is going on just yet, so\nI'm not going to try and explain because I'm likely to get it wrong.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 30 Jul 2021 14:33:15 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 7/30/21, 11:34 AM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\r\n> Hmm ... I'm not sure we're prepared to backpatch this kind of change.\r\n> It seems a bit too disruptive to how replay works. I think we\r\n> should be focusing solely on patch 0001 to surgically fix the precise\r\n> bug you see. Does patch 0002 exist because you think that a system with\r\n> only 0001 will not correctly deal with a crash at the right time?\r\n\r\nYes, that was what I was worried about. However, I just performed a\r\nvariety of tests with just 0001 applied, and I am beginning to suspect\r\nmy concerns were unfounded. With wal_buffers set very high,\r\nsynchronous_commit set to off, and a long sleep at the end of\r\nXLogWrite(), I can reliably cause the archive status files to lag far\r\nbehind the current open WAL segment. However, even if I crash at this\r\ntime, the .ready files are created when the server restarts (albeit\r\nout of order). This appears to be due to the call to\r\nXLogArchiveCheckDone() in RemoveOldXlogFiles(). Therefore, we can\r\nlikely abandon 0002.\r\n\r\n> Now, the reason I'm looking at this patch series is that we're seeing a\r\n> related problem with walsender/walreceiver, which apparently are capable\r\n> of creating a file in the replica that ends up not existing in the\r\n> primary after a crash, for a reason closely related to what you\r\n> describe for WAL archival. I'm not sure what is going on just yet, so\r\n> I'm not going to try and explain because I'm likely to get it wrong.\r\n\r\nI've suspected that this is due to the use of the flushed location for\r\nthe send pointer, which AFAICT needn't align with a WAL record\r\nboundary.\r\n\r\n /*\r\n * Streaming the current timeline on a primary.\r\n *\r\n * Attempt to send all data that's already been written out and\r\n * fsync'd to disk. We cannot go further than what's been written out\r\n * given the current implementation of WALRead(). 
And in any case\r\n * it's unsafe to send WAL that is not securely down to disk on the\r\n * primary: if the primary subsequently crashes and restarts, standbys\r\n * must not have applied any WAL that got lost on the primary.\r\n */\r\n SendRqstPtr = GetFlushRecPtr();\r\n\r\nNathan\r\n\r\n",
"msg_date": "Fri, 30 Jul 2021 20:25:19 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Jul-30, Bossart, Nathan wrote:\n\n> On 7/30/21, 11:34 AM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\n> > Hmm ... I'm not sure we're prepared to backpatch this kind of change.\n> > It seems a bit too disruptive to how replay works. I think we\n> > should be focusing solely on patch 0001 to surgically fix the precise\n> > bug you see. Does patch 0002 exist because you think that a system with\n> > only 0001 will not correctly deal with a crash at the right time?\n> \n> Yes, that was what I was worried about. However, I just performed a\n> variety of tests with just 0001 applied, and I am beginning to suspect\n> my concerns were unfounded. With wal_buffers set very high,\n> synchronous_commit set to off, and a long sleep at the end of\n> XLogWrite(), I can reliably cause the archive status files to lag far\n> behind the current open WAL segment. However, even if I crash at this\n> time, the .ready files are created when the server restarts (albeit\n> out of order). This appears to be due to the call to\n> XLogArchiveCheckDone() in RemoveOldXlogFiles(). Therefore, we can\n> likely abandon 0002.\n\nThat's great to hear. I'll give 0001 a look again.\n\n> > Now, the reason I'm looking at this patch series is that we're seeing a\n> > related problem with walsender/walreceiver, which apparently are capable\n> > of creating a file in the replica that ends up not existing in the\n> > primary after a crash, for a reason closely related to what you\n> > describe for WAL archival. 
I'm not sure what is going on just yet, so\n> > I'm not going to try and explain because I'm likely to get it wrong.\n> \n> I've suspected that this is due to the use of the flushed location for\n> the send pointer, which AFAICT needn't align with a WAL record\n> boundary.\n\nYeah, I had gotten as far as the GetFlushRecPtr but haven't tracked down\nwhat happens with a contrecord.\n\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 30 Jul 2021 18:22:03 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 7/30/21, 3:23 PM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\r\n> That's great to hear. I'll give 0001 a look again.\r\n\r\nMuch appreciated.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Fri, 30 Jul 2021 22:28:41 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Jul-30, Bossart, Nathan wrote:\n\n> Yes, that was what I was worried about. However, I just performed a\n> variety of tests with just 0001 applied, and I am beginning to suspect\n> my concerns were unfounded. With wal_buffers set very high,\n> synchronous_commit set to off, and a long sleep at the end of\n> XLogWrite(), I can reliably cause the archive status files to lag far\n> behind the current open WAL segment. However, even if I crash at this\n> time, the .ready files are created when the server restarts (albeit\n> out of order).\n\nI think that creating files out of order might be problematic. On the\narchiver side, pgarch_readyXlog() expects to return the oldest\narchivable file; but if we create a newer segment's .ready file first,\nit is possible that a directory scan would return that newer file before\nthe older segment's .ready file appears.\n\nHowever, the comments in pgarch_readyXlog() aren't super convincing that\nprocessing the files in order is actually a correctness requirement, so\nperhaps it doesn't matter all that much.\n\n\nI noticed that XLogCtl->lastNotifiedSeg is protected by both the\ninfo_lck and ArchNotifyLock. I think if it's going to be protected by\nthe lwlock, then we should drop the use of the spinlock.\n\n\nWe set archiver's latch on each XLogArchiveNotify(), but if we're doing\nit in a loop such as in NotifySegmentsReadyForArchive() perhaps it is\nbetter to create all the .ready files first and do PgArchWakeup() at the\nend. I'm not convinced that this is useful but let's at least discard\nthe idea explicitly if not.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Always assume the user will do much worse than the stupidest thing\nyou can imagine.\" (Julien PUYDT)",
"msg_date": "Fri, 30 Jul 2021 19:51:39 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Jul-30, Alvaro Herrera wrote:\n\n> We set archiver's latch on each XLogArchiveNotify(), but if we're doing\n> it in a loop such as in NotifySegmentsReadyForArchive() perhaps it is\n> better to create all the .ready files first and do PgArchWakeup() at the\n> end. I'm not convinced that this is useful but let's at least discard\n> the idea explicitly if not.\n\nhm, this causes an ABI change so it's not backpatchable.\n\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"This is what I like so much about PostgreSQL. Most of the surprises\nare of the \"oh wow! That's cool\" Not the \"oh shit!\" kind. :)\"\nScott Marlowe, http://archives.postgresql.org/pgsql-admin/2008-10/msg00152.php\n\n\n",
"msg_date": "Fri, 30 Jul 2021 23:17:49 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 7/30/21, 4:52 PM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\r\n> I think that creating files out of order might be problematic. On the\r\n> archiver side, pgarch_readyXlog() expects to return the oldest\r\n> archivable file; but if we create a newer segment's .ready file first,\r\n> it is possible that a directory scan would return that newer file before\r\n> the older segment's .ready file appears.\r\n>\r\n> However, the comments in pgarch_readyXlog() aren't super convincing that\r\n> processing the files in order is actually a correctness requirement, so\r\n> perhaps it doesn't matter all that much.\r\n\r\nI can't think of a reason it'd be needed from a correctness\r\nperspective. After a quick scan, I couldn't find any promises about\r\narchival order in the documentation, either. In any case, it doesn't\r\nlook like there's a risk that the archiver will skip files when the\r\n.ready files are created out of order.\r\n\r\n> I noticed that XLogCtl->lastNotifiedSeg is protected by both the\r\n> info_lck and ArchNotifyLock. I think it it's going to be protected by\r\n> the lwlock, then we should drop the use of the spinlock.\r\n\r\nThat seems reasonable to me. This means that the lock is acquired at\r\nthe end of every XLogWrite(), but the other places that acquire the\r\nlock only do so once per WAL segment. Plus, the call to\r\nNotifySegmentsReadyForArchive() at the end of every XLogWrite() should\r\nusually only need the lock for a short amount of time to retrieve a\r\nvalue from shared memory.\r\n\r\n> We set archiver's latch on each XLogArchiveNotify(), but if we're doing\r\n> it in a loop such as in NotifySegmentsReadyForArchive() perhaps it is\r\n> better to create all the .ready files first and do PgArchWakeup() at the\r\n> end. 
I'm not convinced that this is useful but let's at least discard\r\n> the idea explicitly if not.\r\n\r\nI don't have a terribly strong opinion, but I would lean towards\r\nsetting the latch for each call to XLogArchiveNotify() so that the\r\narchiver process can get started as soon as a segment is ready.\r\nHowever, I doubt that holding off until the end of the loop has any\r\ndiscernible effect in most cases.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Sat, 31 Jul 2021 04:05:07 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Jul-31, Bossart, Nathan wrote:\n\n> On 7/30/21, 4:52 PM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\n\n> > I noticed that XLogCtl->lastNotifiedSeg is protected by both the\n> > info_lck and ArchNotifyLock. I think it it's going to be protected by\n> > the lwlock, then we should drop the use of the spinlock.\n> \n> That seems reasonable to me. This means that the lock is acquired at\n> the end of every XLogWrite(),\n\nUhh, actually that there sounds really bad; it's going to be a serious\ncontention point.\n\nAnother option might be to make it an atomic.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Sat, 31 Jul 2021 12:10:44 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 7/31/21, 9:12 AM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\r\n> On 2021-Jul-31, Bossart, Nathan wrote:\r\n>\r\n>> On 7/30/21, 4:52 PM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\r\n>\r\n>> > I noticed that XLogCtl->lastNotifiedSeg is protected by both the\r\n>> > info_lck and ArchNotifyLock. I think it it's going to be protected by\r\n>> > the lwlock, then we should drop the use of the spinlock.\r\n>>\r\n>> That seems reasonable to me. This means that the lock is acquired at\r\n>> the end of every XLogWrite(),\r\n>\r\n> Uhh, actually that there sounds really bad; it's going to be a serious\r\n> contention point.\r\n>\r\n> Another option might be to make it an atomic.\r\n\r\nThis is why I was trying to get away with just using info_lck for\r\nreading lastNotifiedSeg. ArchNotifyLock is mostly intended to protect\r\nRecordBoundaryMap. However, since lastNotifiedSeg is used in\r\nfunctions like GetLatestRecordBoundarySegment() that access the map, I\r\nfound it easier to reason about things if I knew that it wouldn't\r\nchange as long as I held ArchNotifyLock.\r\n\r\nI think the main downside of making lastNotifiedSeg an atomic is that\r\nthe value we first read in NotifySegmentsReadyForArchive() might not\r\nbe up-to-date, which means we might hold off creating .ready files\r\nlonger than necessary.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Sat, 31 Jul 2021 17:53:02 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Jul-31, Bossart, Nathan wrote:\n\n> This is why I was trying to get away with just using info_lck for\n> reading lastNotifiedSeg. ArchNotifyLock is mostly intended to protect\n> RecordBoundaryMap. However, since lastNotifiedSeg is used in\n> functions like GetLatestRecordBoundarySegment() that access the map, I\n> found it easier to reason about things if I knew that it wouldn't\n> change as long as I held ArchNotifyLock.\n\nI think it's okay to make lastNotifiedSeg protected by just info_lck,\nand RecordBoundaryMap protected by just ArchNotifyLock. It's okay to\nacquire the spinlock inside the lwlock-protected area, as long as we\nmake sure never to do the opposite. (And we sure don't want to hold\ninfo_lck long enough that a LWLock acquisition would occur in the\nmeantime). So I modified things that way, and also added another\nfunction to set the seg if it's unset, with a single spinlock\nacquisition (rather than acqquire, read, release, acqquire, set,\nrelease, which sounds like it would have trouble behaving.)\n\nI haven't tried your repro with this yet.\n\nI find it highly suspicious that the patch does an archiver notify (i.e.\ncreation of the .ready file) in XLogInsertRecord(). Is that a sane\nthing to do? Sounds to me like that should be attempted in XLogFlush\nonly. This appeared after Kyotaro's patch at [1] and before your patch\nat [2].\n\n[1] https://postgr.es/m/20201014.090628.839639906081252194.horikyota.ntt@gmail.com\n[2] https://postgr.es/m/EFF40306-8E8A-4259-B181-C84F3F06636C@amazon.com\n\nI also just realized that Kyotaro's patch there also tried to handle the\nstreaming replication issue I was talking about.\n\n> I think the main downside of making lastNotifiedSeg an atomic is that\n> the value we first read in NotifySegmentsReadyForArchive() might not\n> be up-to-date, which means we might hold off creating .ready files\n> longer than necessary.\n\nI'm not sure I understand how this would be a problem. 
If we block\nsomebody from setting a newer value, they'll just set the value\nimmediately after we release the lock. Will we reread the value\nafterwards to see if it changed?\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/",
"msg_date": "Mon, 2 Aug 2021 17:41:39 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/2/21, 2:42 PM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\r\n> I think it's okay to make lastNotifiedSeg protected by just info_lck,\r\n> and RecordBoundaryMap protected by just ArchNotifyLock. It's okay to\r\n> acquire the spinlock inside the lwlock-protected area, as long as we\r\n> make sure never to do the opposite. (And we sure don't want to hold\r\n> info_lck long enough that a LWLock acquisition would occur in the\r\n> meantime). So I modified things that way, and also added another\r\n> function to set the seg if it's unset, with a single spinlock\r\n> acquisition (rather than acqquire, read, release, acqquire, set,\r\n> release, which sounds like it would have trouble behaving.)\r\n\r\nThe patch looks good to me.\r\n\r\n> I find it highly suspicious that the patch does an archiver notify (i.e.\r\n> creation of the .ready file) in XLogInsertRecord(). Is that a sane\r\n> thing to do? Sounds to me like that should be attempted in XLogFlush\r\n> only. This appeared after Kyotaro's patch at [1] and before your patch\r\n> at [2].\r\n\r\nI believe my worry was that we'd miss notifying a segment as soon as\r\npossible if the record was somehow flushed prior to registering the\r\nrecord boundary in the map. If that's actually impossible, then I\r\nwould agree that the extra call to NotifySegmentsReadyForArchive() is\r\nunnecessary.\r\n\r\n>> I think the main downside of making lastNotifiedSeg an atomic is that\r\n>> the value we first read in NotifySegmentsReadyForArchive() might not\r\n>> be up-to-date, which means we might hold off creating .ready files\r\n>> longer than necessary.\r\n>\r\n> I'm not sure I understand how this would be a problem. If we block\r\n> somebody from setting a newer value, they'll just set the value\r\n> immediately after we release the lock. Will we reread the value\r\n> afterwards to see if it changed?\r\n\r\nI think you are right. 
If we see an old value for lastNotifiedSeg,\r\nthe worst case is that we take the ArchNotifyLock, read\r\nlastNotifiedSeg again (which should then be up-to-date), and then\r\nbasically do nothing. I suspect initializing lastNotifiedSeg might\r\nstill be a little tricky, though. Do you think it is important to try\r\nto avoid this spinlock for lastNotifiedSeg? IIUC it's acquired at the\r\nend of every call to XLogWrite() already, and we'd still need to\r\nacquire it for the flush pointer, anyway.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Mon, 2 Aug 2021 23:28:19 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "At Mon, 2 Aug 2021 23:28:19 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> On 8/2/21, 2:42 PM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\n> > I think it's okay to make lastNotifiedSeg protected by just info_lck,\n> > and RecordBoundaryMap protected by just ArchNotifyLock. It's okay to\n> > acquire the spinlock inside the lwlock-protected area, as long as we\n> > make sure never to do the opposite. (And we sure don't want to hold\n> > info_lck long enough that a LWLock acquisition would occur in the\n> > meantime). So I modified things that way, and also added another\n> > function to set the seg if it's unset, with a single spinlock\n> > acquisition (rather than acqquire, read, release, acqquire, set,\n> > release, which sounds like it would have trouble behaving.)\n> \n> The patch looks good to me.\n\n+\tfor (seg = flushed_seg; seg > last_notified; seg--)\n+\t{\n+\t\tRecordBoundaryEntry *entry;\n+\n+\t\tentry = (RecordBoundaryEntry *) hash_search(RecordBoundaryMap,\n+\t\t\t\t\t\t\t\t\t\t\t\t\t(void *) &seg, HASH_FIND,\n\nI'm afraid that using hash to store boundary info is too much. Isn't a\nring buffer enough for this use? In that case it is enough to\nremember only the end LSN of the segment spanning records. It is easy\nto expand the buffer if needed.\n\n+\tif (!XLogSegNoIsInvalid(latest_boundary_seg))\n\nIt is a matter of taste, but I see latest_boundary_seg !=\nInvalidXLogSegNo more frequentlyl, maybe to avoid double negation.\n\n\n@@ -1167,10 +1195,33 @@ XLogInsertRecord(XLogRecData *rdata,\n \t\tSpinLockRelease(&XLogCtl->info_lck);\n \t}\n \n+\t/*\n+\t * Record the record boundary if we crossed the segment boundary. This is\n...\n+\tXLByteToSeg(StartPos, StartSeg, wal_segment_size);\n+\tXLByteToSeg(EndPos, EndSeg, wal_segment_size);\n+\n+\tif (StartSeg != EndSeg && XLogArchivingActive())\n+\t{\n\nThe immediately prceding if block is for cross-page records. 
So we can\nreduce the overhead of the above calculations by moving them to the\npreceding if-block.\n\n\n+RegisterRecordBoundaryEntry(XLogSegNo seg, XLogRecPtr pos)\n\nThe seg is restricted to the segment that pos resides on. The caller\nis free from caring about that restriction if the function takes only pos.\nIt adds a small overhead to calculate the segment number from the LSN but\nI think it doesn't matter so much. (Or if we don't use hash, that\ncalculation is not required at all).\n\n\n@@ -2602,7 +2855,7 @@ XLogWrite(XLogwrtRqst WriteRqst, bool flexible)\n \t\t\t\tLogwrtResult.Flush = LogwrtResult.Write;\t/* end of page */\n \n \t\t\t\tif (XLogArchivingActive())\n-\t\t\t\t\tXLogArchiveNotifySeg(openLogSegNo);\n+\t\t\t\t\tSetLastNotifiedSegmentIfInvalid(openLogSegNo - 1);\n\nIs it safe? If the server didn't notify of WAL files for the recent 3\nfinished segments in the previous server life, they need to be\narchived in this lifetime. But this maybe omits all of the three.\n(I didn't confirm that behavior..)\n\n> > I find it highly suspicious that the patch does an archiver notify (i.e.\n> > creation of the .ready file) in XLogInsertRecord(). Is that a sane\n> > thing to do? Sounds to me like that should be attempted in XLogFlush\n> > only. This appeared after Kyotaro's patch at [1] and before your patch\n> > at [2].\n> \n> I believe my worry was that we'd miss notifying a segment as soon as\n> possible if the record was somehow flushed prior to registering the\n> record boundary in the map. 
If that's actually impossible, then I\n> would agree that the extra call to NotifySegmentsReadyForArchive() is\n> unnecessary.\n\nI don't think that XLogWrite(up to LSN=X) can happen before\nXLogInsert(endpos = X) ends.\n\n> >> I think the main downside of making lastNotifiedSeg an atomic is that\n> >> the value we first read in NotifySegmentsReadyForArchive() might not\n> >> be up-to-date, which means we might hold off creating .ready files\n> >> longer than necessary.\n> >\n> > I'm not sure I understand how this would be a problem. If we block\n> > somebody from setting a newer value, they'll just set the value\n> > immediately after we release the lock. Will we reread the value\n> > afterwards to see if it changed?\n> \n> I think you are right. If we see an old value for lastNotifiedSeg,\n> the worst case is that we take the ArchNotifyLock, read\n> lastNotifiedSeg again (which should then be up-to-date), and then\n\nAgreed.\n\n> basically do nothing. I suspect initializing lastNotifiedSeg might\n> still be a little tricky, though. Do you think it is important to try\n> to avoid this spinlock for lastNotifiedSeg? IIUC it's acquired at the\n> end of every call to XLogWrite() already, and we'd still need to\n> acquire it for the flush pointer, anyway.\n\nAs mentioned above, I think it needs more consideration.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 03 Aug 2021 11:37:28 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/2/21, 7:37 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\r\n> I'm afraid that using hash to store boundary info is too much. Isn't a\r\n> ring buffer enough for this use? In that case it is enough to\r\n> remember only the end LSN of the segment spanning records. It is easy\r\n> to expand the buffer if needed.\r\n\r\nI agree that the hash table requires a bit more memory than what is\r\nprobably necessary, but I'm not sure I agree that maintaining a custom\r\ndata structure to save a few kilobytes of memory is worth the effort.\r\n\r\n> @@ -2602,7 +2855,7 @@ XLogWrite(XLogwrtRqst WriteRqst, bool flexible)\r\n> LogwrtResult.Flush = LogwrtResult.Write; /* end of page */\r\n>\r\n> if (XLogArchivingActive())\r\n> - XLogArchiveNotifySeg(openLogSegNo);\r\n> + SetLastNotifiedSegmentIfInvalid(openLogSegNo - 1);\r\n>\r\n> Is it safe? If server didn't notified of WAL files for recent 3\r\n> finished segments in the previous server life, they need to be\r\n> archived this life time. But this omits maybe all of the tree.\r\n> (I didn't confirm that behavior..)\r\n\r\nI tested this scenario out earlier [0]. It looks like the call to\r\nXLogArchiveCheckDone() in RemoveOldXlogFiles() will take care of\r\ncreating any .ready files we missed.\r\n\r\n>> I believe my worry was that we'd miss notifying a segment as soon as\r\n>> possible if the record was somehow flushed prior to registering the\r\n>> record boundary in the map. If that's actually impossible, then I\r\n>> would agree that the extra call to NotifySegmentsReadyForArchive() is\r\n>> unnecessary.\r\n>\r\n> I don't think that XLogWrite(up to LSN=X) can happen before\r\n> XLogInsert(endpos = X) ends.\r\n\r\nIs there anything preventing that from happening? At the location\r\nwhere we are registering the record boundary, we've already called\r\nCopyXLogRecordToWAL(), and neither the WAL insertion lock nor the\r\nWALWriteLock are held. 
Even if we register the boundary before\r\nupdating the shared LogwrtRqst.Write, there's a chance that someone\r\nelse has already moved it ahead and called XLogWrite(). I think the\r\nworst case scenario is that we hold off creating .ready files longer\r\nthan necessary, but IMO that's still a worthwhile thing to do.\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/DA71434B-7340-4984-9B91-F085BC47A778%40amazon.com\r\n\r\n",
"msg_date": "Tue, 3 Aug 2021 21:32:18 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "At Tue, 3 Aug 2021 21:32:18 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> On 8/2/21, 7:37 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\n> > I'm afraid that using hash to store boundary info is too much. Isn't a\n> > ring buffer enough for this use? In that case it is enough to\n> > remember only the end LSN of the segment spanning records. It is easy\n> > to expand the buffer if needed.\n> \n> I agree that the hash table requires a bit more memory than what is\n> probably necessary, but I'm not sure I agree that maintaining a custom\n> data structure to save a few kilobytes of memory is worth the effort.\n\nMemory is one of my concerns but more significant point was required\nCPU cycles by GetLatestRecordBoundarySegment. So I don't mind it is\nusing a hash if the loop on the hash didn't block other backends.\n\nAddition to that, while NotifySegmentsReadyForArchive() is notifying\npending segments, other backends simultaneously reach there are\nblocked until the notification, incuding file creation, finishes. I\ndon't think that's great. Couldn't we set lastNotifiedSegment before\nthe loop? At the moment a backend decides to notify some segments,\nothers no longer need to consider those segments. Even if the backend\ncrashes meanwhile, as you mentionied below, it's safe since the\nunnotified segments are notifed after restart.\n\n> > @@ -2602,7 +2855,7 @@ XLogWrite(XLogwrtRqst WriteRqst, bool flexible)\n> > LogwrtResult.Flush = LogwrtResult.Write; /* end of page */\n> >\n> > if (XLogArchivingActive())\n> > - XLogArchiveNotifySeg(openLogSegNo);\n> > + SetLastNotifiedSegmentIfInvalid(openLogSegNo - 1);\n> >\n> > Is it safe? If server didn't notified of WAL files for recent 3\n> > finished segments in the previous server life, they need to be\n> > archived this life time. But this omits maybe all of the tree.\n> > (I didn't confirm that behavior..)\n> \n> I tested this scenario out earlier [0]. 
It looks like the call to\n> XLogArchiveCheckDone() in RemoveOldXlogFiles() will take care of\n> creating any .ready files we missed.\n\nYeah, I recalled that behavior. In that case crash recovery reads up\nto just before the last (continued) record in the last finished\nsegment. On the other hand if crash recovery was able to read that\nrecord, it's safe to archive the last segment immediately after\nrecovery. So that behavior is safe. Thanks!\n\n> >> I believe my worry was that we'd miss notifying a segment as soon as\n> >> possible if the record was somehow flushed prior to registering the\n> >> record boundary in the map. If that's actually impossible, then I\n> >> would agree that the extra call to NotifySegmentsReadyForArchive() is\n> >> unnecessary.\n> >\n> > I don't think that XLogWrite(up to LSN=X) can happen before\n> > XLogInsert(endpos = X) ends.\n> \n> Is there anything preventing that from happening? At the location\n> where we are registering the record boundary, we've already called\n> CopyXLogRecordToWAL(), and neither the WAL insertion lock nor the\n> WALWriteLock are held. Even if we register the boundary before\n> updating the shared LogwrtRqst.Write, there's a chance that someone\n> else has already moved it ahead and called XLogWrite(). I think the\n> worst case scenario is that we hold off creating .ready files longer\n> than necessary, but IMO that's still a worthwhile thing to do.\n\nOh, boundary registration actually happens after an insertion ends\n(but before XLogInsert ends). The missed segment is never processed\ndue to the qualification by lastNotifiedSeg.\n\nDoes it work that RegisterRecordBoundaryEntry omits registering the\nboundary if it finds lastNotifiedSeg has gone too far?\n\n> Nathan\n> \n> [0] https://postgr.es/m/DA71434B-7340-4984-9B91-F085BC47A778%40amazon.com\n> \n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 05 Aug 2021 10:57:57 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "By the way about the v3 patch,\n\n+#define InvalidXLogSegNo\t((XLogSegNo) 0xFFFFFFFFFFFFFFFF)\n\nLike InvalidXLogRecPtr, the first valid segment number is 1 so we can\nuse 0 as InvalidXLogSegNo.\n\nBootStrapXLOG():\n\t/* Create first XLOG segment file */\n\topenLogFile = XLogFileInit(1);\n\nKeepLogSeg():\n\t\t\t/* avoid underflow, don't go below 1 */\n\t\t\tif (currSegNo <= keep_segs)\n\t\t\t\tsegno = 1;\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 05 Aug 2021 13:04:20 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/4/21, 6:58 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\r\n> Addition to that, while NotifySegmentsReadyForArchive() is notifying\r\n> pending segments, other backends simultaneously reach there are\r\n> blocked until the notification, incuding file creation, finishes. I\r\n> don't think that's great. Couldn't we set lastNotifiedSegment before\r\n> the loop? At the moment a backend decides to notify some segments,\r\n> others no longer need to consider those segments. Even if the backend\r\n> crashes meanwhile, as you mentionied below, it's safe since the\r\n> unnotified segments are notifed after restart.\r\n\r\nThat seems reasonable to me. It looks like we rely on\r\nRemoveOldXlogFiles() even today for when XLogArchiveNotify() fails. I\r\nupdated this in v4 of the patch.\r\n\r\nIn addition to this change, I also addressed your other feedback by\r\nchanging XLogSegNoIsInvalid() to XLogSegNoIsValid() and by moving\r\nrecord boundary registration to the \"if\" block for cross-page records.\r\n\r\n> Does it work that RegisterRecordBoundaryEntry omits registering of the\r\n> bounary if it finds lastNotifiedSeg have gone too far?\r\n\r\nYeah, there's no reason to add a record boundary if we've already\r\nnotified the prior segment. For that to happen, another cross-segment\r\nrecord would have to be flushed to disk and\r\nNotifySegmentsReadyForArchive() would have to be called before\r\nregistering the boundary. With that being said, I don't expect an\r\nextra map entry here and there to impact performance enough for us to\r\nworry about it.\r\n\r\nNathan",
"msg_date": "Thu, 5 Aug 2021 05:14:01 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/4/21, 9:05 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\r\n> By the way about the v3 patch,\r\n>\r\n> +#define InvalidXLogSegNo ((XLogSegNo) 0xFFFFFFFFFFFFFFFF)\r\n>\r\n> Like InvalidXLogRecPtr, the first valid segment number is 1 so we can\r\n> use 0 as InvalidXLogSegNo.\r\n\r\nIt's been a while since I wrote this, but if I remember correctly, the\r\nissue with using 0 is that we could end up initializing\r\nlastNotifiedSeg to InvalidXLogSegNo in XLogWrite(). Eventually, we'd\r\ninitialize it to 1, but we will have skipped creating the .ready file\r\nfor the first segment.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Thu, 5 Aug 2021 05:15:04 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "At Thu, 5 Aug 2021 05:15:04 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> On 8/4/21, 9:05 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\n> > By the way about the v3 patch,\n> >\n> > +#define InvalidXLogSegNo ((XLogSegNo) 0xFFFFFFFFFFFFFFFF)\n> >\n> > Like InvalidXLogRecPtr, the first valid segment number is 1 so we can\n> > use 0 as InvalidXLogSegNo.\n> \n> It's been a while since I wrote this, but if I remember correctly, the\n> issue with using 0 is that we could end up initializing\n> lastNotifiedSeg to InvalidXLogSegNo in XLogWrite(). Eventually, we'd\n> initialize it to 1, but we will have skipped creating the .ready file\n> for the first segment.\n\nMaybe this?\n\n+\t\t\t\t\tSetLastNotifiedSegmentIfInvalid(openLogSegNo - 1);\n\nHmm. Theoretically 0 is invalid as segment number. So we'd better not\nusing 0 as a valid value of lastNotifiedSeg.\n\nHonestly I don't like having this initialization in XLogWrite. We\nshould and I think can initialize it earlier. It seems to me the most\nappropriate timing to initialize the variable is just before running\nthe end-of-recovery checkpoint). Since StartupXLOG knows the first\nsegment to write . If it were set to 0, that doesn't matter at all.\nWe can get rid of the new symbol by doing this.\n\nMaybe something like this:\n\n>\t{\n>\t\t/*\n>\t\t * There is no partial block to copy. Just set InitializedUpTo, and\n>\t\t * let the first attempt to insert a log record to initialize the next\n>\t\t * buffer.\n>\t\t */\n>\t\tXLogCtl->InitializedUpTo = EndOfLog;\n>\t}\n> \n+\t/*\n+\t * EndOfLog resides on the next segment of the last finished one. Set the\n+\t * last finished segment as lastNotifiedSeg now. 
In the case where the\n+\t * last crash has left the last several segments not being marked as\n+\t * .ready, the checkpoint just after does that for all finished segments.\n+\t * There's a corner case where the checkpoint advances segment, but that\n+\t * ends up at most with a duplicate archive notification.\n+\t */\n+\tXLByteToSeg(EndOfLog, EndOfLogSeg, wal_segment_size);\n+\tAssert(EndOfLogSeg > 0);\n+\tSetLastNotifiedSegment(EndOfLogSeg - 1);\n+\n> \tLogwrtResult.Write = LogwrtResult.Flush = EndOfLog;\n\nDoes this make sense?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n\n",
"msg_date": "Thu, 05 Aug 2021 16:32:32 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/5/21, 12:39 AM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\r\n> Honestly I don't like having this initialization in XLogWrite. We\r\n> should and I think can initialize it earlier. It seems to me the most\r\n> appropriate timing to initialize the variable is just before running\r\n> the end-of-recovery checkpoint). Since StartupXLOG knows the first\r\n> segment to write . If it were set to 0, that doesn't matter at all.\r\n> We can get rid of the new symbol by doing this.\r\n\r\nThis seems like a good idea to me. I made this change in v5. I\r\nperformed some basic testing, and it seems to reliably initialize\r\nlastNotifiedSeg correctly.\r\n\r\n> +\t/*\r\n> +\t * EndOfLog resides on the next segment of the last finished one. Set the\r\n> +\t * last finished segment as lastNotifiedSeg now. In the case where the\r\n> +\t * last crash has left the last several segments not being marked as\r\n> +\t * .ready, the checkpoint just after does that for all finished segments.\r\n> +\t * There's a corner case where the checkpoint advances segment, but that\r\n> +\t * ends up at most with a duplicate archive notification.\r\n> +\t */\r\n\r\nI'm not quite following the corner case you've described here. Is it\r\npossible that the segment that EndOfLog points to will be eligible for\r\nremoval after the checkpoint?\r\n\r\nIn v5 of the patch, I've also added an extra call to\r\nNotifySegmentsReadyForArchive() in the same place we previously\r\ncreated the .ready files. I think this helps notify archiver sooner\r\nin certain cases (e.g., asynchronous commit).\r\n\r\nNathan",
"msg_date": "Fri, 6 Aug 2021 00:21:34 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "At Fri, 6 Aug 2021 00:21:34 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> On 8/5/21, 12:39 AM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\n> > Honestly I don't like having this initialization in XLogWrite. We\n> > should and I think can initialize it earlier. It seems to me the most\n> > appropriate timing to initialize the variable is just before running\n> > the end-of-recovery checkpoint). Since StartupXLOG knows the first\n> > segment to write . If it were set to 0, that doesn't matter at all.\n> > We can get rid of the new symbol by doing this.\n> \n> This seems like a good idea to me. I made this change in v5. I\n> performed some basic testing, and it seems to reliably initialize\n> lastNotifiedSeg correctly.\n> \n> > +\t/*\n> > +\t * EndOfLog resides on the next segment of the last finished one. Set the\n> > +\t * last finished segment as lastNotifiedSeg now. In the case where the\n> > +\t * last crash has left the last several segments not being marked as\n> > +\t * .ready, the checkpoint just after does that for all finished segments.\n> > +\t * There's a corner case where the checkpoint advances segment, but that\n> > +\t * ends up at most with a duplicate archive notification.\n> > +\t */\n> \n> I'm not quite following the corner case you've described here. Is it\n> possible that the segment that EndOfLog points to will be eligible for\n> removal after the checkpoint?\n\nArchiving doesn't immediately mean removal. A finished segment is\nought to be archived right away. Since the EndOfLog segment must not\nget marked .ready, setting lastNotifiedSeg to the previous segment is\nquite right, but if the end-of-recovery checkpoint advances segment,\nEndOfLog is marked .ready at the XLogFlush just after. But, sorry,\nwhat I forgot at the time was the checkpoint also moves\nlastNotifiedSeg. 
So, sorry, that corner case does not exist.\n\n> In v5 of the patch, I've also added an extra call to\n> NotifySegmentsReadyForArchive() in the same place we previously\n> created the .ready files. I think this helps notify archiver sooner\n> in certain cases (e.g., asynchronous commit).\n\nIn v5, NotifySegmentsReadyForArchive() still holds ArchNotifyLock\nincluding .ready file creations. Since the notification loop doesn't\nneed the hash itself, the loop can be taken out of the lock section?\n\ncurrent:\n\tLWLockAcquire(ArchNotifyLock, LW_EXCLUSIVE);\n\tlast_notified = GetLastNotifiedSegment();\n\tlatest_boundary_seg = GetLatestRecordBoundarySegment(last_notified, flushed, &found);\n\n\tif (found)\n\t{\n\t\tSetLastNotifiedSegment(latest_boundary_seg - 1);\n\t\tfor (seg = last_notified + 1; seg < latest_boundary_seg; seg++)\n\t\t\tXLogArchiveNotifySeg(seg, false);\n\n\t\tRemoveRecordBoundariesUpTo(latest_boundary_seg);\n\n\t\tPgArchWakeup();\n\t}\n\tLWLockRelease(ArchNotifyLock);\n\nBut we can release the lock earlier.\n\n\tLWLockAcquire(ArchNotifyLock, LW_EXCLUSIVE);\n\tlast_notified = GetLastNotifiedSegment();\n\tlatest_boundary_seg = GetLatestRecordBoundarySegment(last_notified, flushed, &found);\n\n\tif (found)\n\t{\n\t\tSetLastNotifiedSegment(latest_boundary_seg - 1);\n\t\tRemoveRecordBoundariesUpTo(latest_boundary_seg);\n\t}\n\tLWLockRelease(ArchNotifyLock);\n\n\tif (found)\n\t{\n\t\tfor (seg = last_notified + 1; seg < latest_boundary_seg; seg++)\n\t\t\tXLogArchiveNotifySeg(seg, false);\n\n\t\tPgArchWakeup();\n\t}\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 06 Aug 2021 16:41:38 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/6/21, 12:42 AM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\r\n> In v5, NotifySegmentsReadyForArchive() still holds ArchNotifyLock\r\n> including .ready file creations. Since the notification loop doesn't\r\n> need the hash itself, the loop can be taken out of the lock section?\r\n\r\nI think that works. This creates another opportunity for archive\r\nstatus files to be created out of order, but as discussed elsewhere, I\r\nthink we have to be prepared for that regardless. I moved the\r\nnotification loop out of the lock section in v6.\r\n\r\nNathan",
"msg_date": "Fri, 6 Aug 2021 18:26:12 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "So why do we call this structure \"record boundary map\", when the\nboundaries it refers to are segment boundaries? I think we should call\nit \"segment boundary map\" instead ... and also I think we should use\nthat name in the lock that protects it, so instead of ArchNotifyLock, it\ncould be SegmentBoundaryLock or perhaps WalSegmentBoundaryLock.\n\nThe reason for the latter is that I suspect the segment boundary map\nwill also have a use to fix the streaming replication issue I mentioned\nearlier in the thread. This also makes me think that we'll want the wal\nrecord *start* address to be in the hash table too, not just its *end*\naddress. So we'll use the start-1 as position to send, rather than the\nend-of-segment when GetFlushRecPtr() returns that.\n\nAs for 0xFFFFFFFFFFFFFFFF, I think it would be cleaner to do a\n#define MaxXLogSegNo with that value in the line immediately after\ntypedef XLogSegNo, rather than use the numerical value directly in the\nassignment.\n\nTypo in comment atop RemoveRecordBoundariesUpTo: it reads \"up to an\",\nshould read \"up to and\".\n\nI think the API of GetLatestRecordBoundarySegment would be better by\nreturning the boolean and having the segment as out argument. Then you\ncould do the caller more cleanly,\n\nif (GetLatestRecordBoundarySegment(last_notified, flushed, &latest_boundary_segment))\n{\n SetLastNotified( ... );\n RemoveRecordBoundaries( ... );\n LWLockRelease( ... );\n for (..)\n XLogArchiveNotifySeg(...);\n PgArchWakeup();\n}\nelse\n LWLockRelease(...);\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"La virtud es el justo medio entre dos defectos\" (Aristóteles)\n\n\n",
"msg_date": "Mon, 16 Aug 2021 20:09:00 -0400",
"msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "Attached is a new version of the patch with all feedback addressed.\r\n\r\nOn 8/16/21, 5:09 PM, \"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org> wrote:\r\n> The reason for the latter is that I suspect the segment boundary map\r\n> will also have a use to fix the streaming replication issue I mentioned\r\n> earlier in the thread. This also makes me think that we'll want the wal\r\n> record *start* address to be in the hash table too, not just its *end*\r\n> address. So we'll use the start-1 as position to send, rather than the\r\n> end-of-segment when GetFlushRecPtr() returns that.\r\n\r\nI've been thinking about the next steps for this one, too. ISTM we'll\r\nneed to basically assume that the flush pointer jumps along record\r\nboundaries except for the cross-segment records. I don't know if that\r\nis the safest assumption, but I think the alternative involves\r\nrecording every record boundary in the map.\r\n\r\nNathan",
"msg_date": "Tue, 17 Aug 2021 04:25:05 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Aug-17, Bossart, Nathan wrote:\n\n> On 8/16/21, 5:09 PM, \"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org> wrote:\n> > The reason for the latter is that I suspect the segment boundary map\n> > will also have a use to fix the streaming replication issue I mentioned\n> > earlier in the thread. This also makes me think that we'll want the wal\n> > record *start* address to be in the hash table too, not just its *end*\n> > address. So we'll use the start-1 as position to send, rather than the\n> > end-of-segment when GetFlushRecPtr() returns that.\n> \n> I've been thinking about the next steps for this one, too. ISTM we'll\n> need to basically assume that the flush pointer jumps along record\n> boundaries except for the cross-segment records. I don't know if that\n> is the safest assumption, but I think the alternative involves\n> recording every record boundary in the map.\n\nI'm not sure I understand your idea correctly. Perhaps another solution\nis to assume that the flush pointer jumps along record boundaries\n*including* for cross-segment records. The problem stems precisely from\nthe fact that we set the flush pointer at segment boundaries, even when\nthey aren't record boundaries.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\nBob [Floyd] used to say that he was planning to get a Ph.D. by the \"green\nstamp method,\" namely by saving envelopes addressed to him as 'Dr. Floyd'.\nAfter collecting 500 such letters, he mused, a university somewhere in\nArizona would probably grant him a degree. (Don Knuth)\n\n\n",
"msg_date": "Tue, 17 Aug 2021 13:44:05 -0400",
"msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/17/21, 10:44 AM, \"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org> wrote:\r\n> On 2021-Aug-17, Bossart, Nathan wrote:\r\n>> I've been thinking about the next steps for this one, too. ISTM we'll\r\n>> need to basically assume that the flush pointer jumps along record\r\n>> boundaries except for the cross-segment records. I don't know if that\r\n>> is the safest assumption, but I think the alternative involves\r\n>> recording every record boundary in the map.\r\n>\r\n> I'm not sure I understand your idea correctly. Perhaps another solution\r\n> is to assume that the flush pointer jumps along record boundaries\r\n> *including* for cross-segment records. The problem stems precisely from\r\n> the fact that we set the flush pointer at segment boundaries, even when\r\n> they aren't record boundary.\r\n\r\nI think we are in agreement. If we assume that the flush pointer\r\njumps along record boundaries and segment boundaries, the solution\r\nwould be to avoid using the flush pointer when it points to a segment\r\nboundary (given that the segment boundary is not also a record\r\nboundary). Instead, we'd only send up to the start position of the\r\nlast record in the segment to standbys.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Tue, 17 Aug 2021 18:06:23 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Aug-17, Bossart, Nathan wrote:\n\n> I think we are in agreement. If we assume that the flush pointer\n> jumps along record boundaries and segment boundaries, the solution\n> would be to avoid using the flush pointer when it points to a segment\n> boundary (given that the segment boundary is not also a record\n> boundary). Instead, we'd only send up to the start position of the\n> last record in the segment to standbys.\n\nAgreed.\n\nAn implementation for that would be to test the flush pointer for it\nbeing a segment boundary, and in that case we (acquire segment boundary\nlock and) test for presence in the segment boundary map. If present,\nthen retreat the pointer to the record's start address.\n\nThis means that we acquire the segment boundary lock rarely. I was\nconcerned that we'd need to acquire it every time we read the flush\npointer, which would have been a disaster.\n\nThanks\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 17 Aug 2021 14:21:50 -0400",
"msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "The thing I still don't understand about this patch is why we call\nRegisterSegmentBoundaryEntry and NotifySegmentsReadyForArchive in\nXLogInsertRecord.\n\nMy model concept of this would have these routines called only in\nXLogFlush / XLogWrite, which are conceptually about transferring data\nfrom in-memory WAL buffers into the on-disk WAL store (pg_xlog files).\nAs I understand it, XLogInsertRecord is about copying bytes from the\nhigh-level operation (heap insert etc) into WAL buffers. So doing the\nregister/notify dance in both places should be redundant and\nunnecessary.\n\n\nIn the NotifySegmentsReadyForArchive() call at the bottom of XLogWrite,\nwe use flushed=InvalidXLogRecPtr. Why is that? Surely we can use\nLogwrtResult.Flush, just like in the other callsite there? (If we're\ncovering for somebody advancing FlushRecPtr concurrently, I think we\nshould add a comment to explain that. But why do we need to do that in\nthe first place?)\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"The eagle never lost so much time, as\nwhen he submitted to learn of the crow.\" (William Blake)\n\n\n",
"msg_date": "Tue, 17 Aug 2021 16:23:03 -0400",
"msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/17/21, 1:24 PM, \"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org> wrote:\r\n> The thing I still don't understand about this patch is why we call\r\n> RegisterSegmentBoundaryEntry and NotifySegmentsReadyForArchive in\r\n> XLogInsertRecord.\r\n>\r\n> My model concept of this would have these routines called only in\r\n> XLogFlush / XLogWrite, which are conceptually about transferring data\r\n> from in-memory WAL buffers into the on-disk WAL store (pg_xlog files).\r\n> As I understand, XLogInsertRecord is about copying bytes from the\r\n> high-level operation (heap insert etc) into WAL buffers. So doing the\r\n> register/notify dance in both places should be redundant and\r\n> unnecessary.\r\n\r\nThe main reason for registering the boundaries in XLogInsertRecord()\r\nis that it has the required information about the WAL record\r\nboundaries. I do not think that XLogWrite() has this information.\r\nIf we assumed that write requests always pointed to record boundaries,\r\nwe could probably just move the XLogArchiveNotifySeg() calls to the\r\nend of XLogWrite(), which is what my original patch [0] did.\r\n\r\n> In the NotifySegmentsReadyForArchive() call at the bottom of XLogWrite,\r\n> we use flushed=InvalidXLogRecPtr. Why is that? Surely we can use\r\n> LogwrtResult.Flush, just like in the other callsite there? (If we're\r\n> covering for somebody advancing FlushRecPtr concurrently, I think we\r\n> add a comment to explain that. But why do we need to do that in the\r\n> first place?)\r\n\r\nGood point. I did this in the new version of the patch.\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/CBDDFA01-6E40-46BB-9F98-9340F4379505%40amazon.com",
"msg_date": "Tue, 17 Aug 2021 21:09:52 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Aug-17, Bossart, Nathan wrote:\n\n> The main reason for registering the boundaries in XLogInsertRecord()\n> is that it has the required information about the WAL record\n> boundaries. I do not think that XLogWrite() has this information.\n\nDoh, of course. So why don't we call Register in\nXLogInsertRecord, and Notify in XLogWrite?\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"La fuerza no está en los medios físicos\nsino que reside en una voluntad indomable\" (Gandhi)\n\n\n",
"msg_date": "Tue, 17 Aug 2021 17:11:46 -0400",
"msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/17/21, 2:13 PM, \"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org> wrote:\r\n> On 2021-Aug-17, Bossart, Nathan wrote:\r\n>\r\n>> The main reason for registering the boundaries in XLogInsertRecord()\r\n>> is that it has the required information about the WAL record\r\n>> boundaries. I do not think that XLogWrite() has this information.\r\n>\r\n> Doh, of course. So, why isn't it that we call Register in\r\n> XLogInsertRecord, and Notify in XLogWrite?\r\n\r\nWe do. However, we also call NotifySegmentsReadyForArchive() in\r\nXLogInsertRecord() to handle the probably-unlikely scenario that the\r\nflush pointer has already advanced past the to-be-registered boundary.\r\nThis ensures that the .ready files are created as soon as possible.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Tue, 17 Aug 2021 21:44:57 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Aug-17, Bossart, Nathan wrote:\n> On 8/17/21, 2:13 PM, \"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org> wrote:\n>\n> > So, why isn't it that we call Register in XLogInsertRecord, and\n> > Notify in XLogWrite?\n> \n> We do. However, we also call NotifySegmentsReadyForArchive() in\n> XLogInsertRecord() to handle the probably-unlikely scenario that the\n> flush pointer has already advanced past the to-be-registered boundary.\n> This ensures that the .ready files are created as soon as possible.\n\nI see.\n\nI have two thoughts on that. First, why not do it outside the block\nthat tests for crossing a segment boundary? If that's a good thing to\ndo, then we should do it always.\n\nHowever, why do it in a WAL-producing client-connected backend? It\nstrikes me as a bad thing to do, because you are possibly causing delays\nfor client-connected backends. I suggest that we should give this task\nto the WAL writer process -- say, have XLogBackgroundFlush do it.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"All rings of power are equal,\nBut some rings of power are more equal than others.\"\n (George Orwell's The Lord of the Rings)\n\n\n",
"msg_date": "Tue, 17 Aug 2021 18:32:08 -0400",
"msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Aug-17, alvherre@alvh.no-ip.org wrote:\n\n> However, why do it in a WAL-producing client-connected backend? It\n> strikes me as a bad thing to do, because you are possibly causing delays\n> for client-connected backends. I suggest that we should give this task\n> to the WAL writer process -- say, have XLogBackgroundFlush do it.\n\nReading the comments in walwriter.c, I am hesitant about having walwriter\ndo it:\n\n> * Because the walwriter's cycle is directly linked to the maximum delay\n> * before async-commit transactions are guaranteed committed, it's probably\n> * unwise to load additional functionality onto it. For instance, if you've\n> * got a yen to create xlog segments further in advance, that'd be better done\n> * in bgwriter than in walwriter.\n\nSo that comment suggests that we should give the responsibility to bgwriter.\nThis seems good enough to me. I suppose if bgwriter has a long run of\nbuffers to write it could take a little bit of time (a few hundred\nmilliseconds?) but I think that should be okay.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"No renuncies a nada. No te aferres a nada.\"\n\n\n",
"msg_date": "Wed, 18 Aug 2021 13:04:11 -0400",
"msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/18/21, 10:06 AM, \"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org> wrote:\r\n> So that comment suggests that we should give the responsibility to bgwriter.\r\n> This seems good enough to me. I suppose if bgwriter has a long run of\r\n> buffers to write it could take a little bit of time (a few hundred\r\n> milliseconds?) but I think that should be okay.\r\n\r\nDo you think bgwriter should be the only caller of\r\nNotifySegmentsReadyForArchive(), or should we still have XLogWrite()\r\ncall it?\r\n\r\nNathan\r\n\r\n",
"msg_date": "Wed, 18 Aug 2021 17:24:38 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Aug-18, Bossart, Nathan wrote:\n\n> On 8/18/21, 10:06 AM, \"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org> wrote:\n> > So that comment suggests that we should give the responsibility to bgwriter.\n> > This seems good enough to me. I suppose if bgwriter has a long run of\n> > buffers to write it could take a little bit of time (a few hundred\n> > milliseconds?) but I think that should be okay.\n> \n> Do you think bgwriter should be the only caller of\n> NotifySegmentsReadyForArchive(), or should we still have XLogWrite()\n> call it?\n\nI think XLogWrite should absolutely be the primary caller. The one in\nbgwriter should be a backstop for the case you describe where the flush\npointer advanced past the registration point in XLogInsertRecord.\n\nI realize this means there's a contradiction with my previous argument,\nin that synchronous transaction commit calls XLogWrite at some point, so\nwe *are* putting the client-connected backend in charge of creating the\nnotify files. However, that only happens on transaction commit, where\nwe already accept responsibility for the WAL flush, not on each\nindividual XLOG record insert; also, the WAL writer will take care of it\nsometimes, for transactions that are long-enough lived.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"How strange it is to find the words \"Perl\" and \"saner\" in such close\nproximity, with no apparent sense of irony. I doubt that Larry himself\ncould have managed it.\" (ncm, http://lwn.net/Articles/174769/)\n\n\n",
"msg_date": "Wed, 18 Aug 2021 13:40:03 -0400",
"msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Aug-18, alvherre@alvh.no-ip.org wrote:\n\n> I realize this means there's a contradiction with my previous argument,\n> in that synchronous transaction commit calls XLogWrite at some point, so\n> we *are* putting the client-connected backend in charge of creating the\n> notify files. However, that only happens on transaction commit, where\n> we already accept responsibility for the WAL flush, not on each\n> individual XLOG record insert; also, the WAL writer will take care of it\n> sometimes, for transactions that are long-enough lived.\n\nEh. I just said WAL writer will sometimes do it, and that's true\nbecause it'll occur in XLogBackgroundFlush. But upthread I wimped out\nof having WAL writer call NotifySegmentsReadyForArchive() and instead\nopined to give responsibility to bgwriter. However, thinking about it\nagain, maybe it does make sense to have walwriter do it too directly.\nThis causes no harm to walwriter's time constraints, since *it will have\nto do it via XLogBackgroundFlush anyway*.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 18 Aug 2021 13:45:23 -0400",
"msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/18/21, 10:48 AM, \"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org> wrote:\r\n> On 2021-Aug-18, alvherre@alvh.no-ip.org wrote:\r\n>\r\n>> I realize this means there's a contradiction with my previous argument,\r\n>> in that synchronous transaction commit calls XLogWrite at some point, so\r\n>> we *are* putting the client-connected backend in charge of creating the\r\n>> notify files. However, that only happens on transaction commit, where\r\n>> we already accept responsibility for the WAL flush, not on each\r\n>> individual XLOG record insert; also, the WAL writer will take care of it\r\n>> sometimes, for transactions that are long-enough lived.\r\n>\r\n> Eh. I just said WAL writer will sometimes do it, and that's true\r\n> because it'll occur in XLogBackgroundFlush. But upthread I wimped out\r\n> of having WAL writer call NotifySegmentsReadyForArchive() and instead\r\n> opined to give responsibility to bgwriter. However, thinking about it\r\n> again, maybe it does make sense to have walwriter do it too directly.\r\n> This causes no harm to walwriter's time constraints, since *it will have\r\n> to do it via XLogBackgroundFlush anyway*.\r\n\r\nI'll add it after XLogBackgroundFlush(). I think we'll also want to\r\nset the WAL writer's latch in case it is hibernating.\r\n\r\nAnother approach could be to keep the NotifySegmentsReadyForArchive()\r\ncall in XLogInsertRecord(), but only call it if the flush pointer is\r\nbeyond the boundary we just registered. Or we could only set the\r\nlatch in XLogInsertRecord() if we detect that the flush pointer has\r\nadvanced.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Wed, 18 Aug 2021 18:02:31 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Aug-18, Bossart, Nathan wrote:\n\n> I'll add it after XLogBackgroundFlush().\n\nI was wondering which would be better: before or after.\nXLogBackgroundFlush would do it anyway, so if you do it after then it's\nnot clear to me that it'd do anything (I mean we should not do any new\ncalls of NotifySegmentsReadyForArchive and just rely on the one in\nXLogBackgroundFlush -> XLogWrite).\n\nThe advantage of doing NotifySegmentsReadyForArchive before\nXLogBackgroundFlush is that the files would be created sooner, so the\narchiver can be working in parallel while walwriter does its other\nthing; then we'd reach the NotifySegmentsReadyForArchive in\nXLogBackgroundFlush and it'd find nothing to do most of the time, which\nis just fine.\n\n> I think we'll also want to set the WAL writer's latch in case it is\n> hibernating.\n\nYeah. (That's another advantage of doing it in walwriter rather than\nbgwriter: we don't publish bgwriter's latch anywhere AFAICS).\n\n> Another approach could be to keep the NotifySegmentsReadyForArchive()\n> call in XLogInsertRecord(), but only call it if the flush pointer is\n> beyond the boundary we just registered. Or we could only set the\n> latch in XLogInsertRecord() if we detect that the flush pointer has\n> advanced.\n\nHmm.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 18 Aug 2021 14:30:05 -0400",
"msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/18/21, 11:31 AM, \"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org> wrote:\r\n> I was wondering which would be better: before or after.\r\n> XLogBackgroundFlush would do it anyway, so if you do it after then it's\r\n> not clear to me that it'd do anything (I mean we should not do any new\r\n> calls of NotifySegmentsReadyForArchive and just rely on the one in\r\n> XLogBackgroundFlush -> XLogWrite).\r\n>\r\n> The advantage of doing NotifySegmentsReadyForArchive before\r\n> XLogBackgroundFlush is that the files would be created sooner, so the\r\n> archiver can be working in parallel while walwriter does its other\r\n> thing; then we'd reach the NotifySegmentsReadyForArchive in\r\n> XLogBackgroundFlush and it'd find nothing to do most of the time, which\r\n> is just fine.\r\n\r\nAs long as XLogBackgroundFlush() found work to do, it would take care\r\nof notifying, but I don't think we can depend on that. However, since\r\nwe're primarily using the WAL writer to take care of the case when the\r\nrecord has already been flushed, notifying beforehand seems fine to\r\nme. If XLogBackgroundFlush() does end up calling XLogWrite(), it'll\r\ncall it again, anyway.\r\n\r\nIn the attached patch, I modified XLogInsertRecord() to simply set the\r\nlatch if we detect that flushRecPtr has advanced.\r\n\r\nNathan",
"msg_date": "Wed, 18 Aug 2021 19:07:18 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Aug-18, Bossart, Nathan wrote:\n\n> As long as XLogBackgroundFlush() found work to do, it would take care\n> of notifying, but I don't think we can depend on that. However, since\n> we're primarily using the WAL writer to take care of the case when the\n> record has already been flushed, notifying beforehand seems fine to\n> me. If XLogBackgroundFlush() does end up calling XLogWrite(), it'll\n> call it again, anyway.\n\nAgreed.\n\n> In the attached patch, I modified XLogInsertRecord() to simply set the\n> latch if we detect that flushRecPtr has advanced.\n\nRight, that's what I was thinking. I modified that slightly to use\nLogwrtResult.Flush (which should be fresh enough) instead of calling\nGetFlushRecPtr again, which saves a bit. I also changed it to > instead\nof >=, because if I understand you correctly we only care to notify if\nthe flush pointer advanced, not in the case it stayed the same.\n\nI made a few other cosmetic tweaks -- added comment to\nSegmentBoundaryEntry and renamed the 'pos' to 'endpos'; renamed argument\n'notify' of XLogArchiveNotify to 'nudge' (because having two different\n\"notify\" concepts in that function seemed confusing).\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"It takes less than 2 seconds to get to 78% complete; that's a good sign.\nA few seconds later it's at 90%, but it seems to have stuck there. Did\nsomebody make percentages logarithmic while I wasn't looking?\"\n http://smylers.hates-software.com/2005/09/08/1995c749.html",
"msg_date": "Wed, 18 Aug 2021 19:46:44 -0400",
"msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/18/21, 4:47 PM, \"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org> wrote:\r\n> On 2021-Aug-18, Bossart, Nathan wrote:\r\n>> In the attached patch, I modified XLogInsertRecord() to simply set the\r\n>> latch if we detect that flushRecPtr has advanced.\r\n>\r\n> Right, that's what I was thinking. I modified that slightly to use\r\n> LogwrtResult.Flush (which should be fresh enough) instead of calling\r\n> GetFlushRecPtr again, which saves a bit. I also changed it to > instead\r\n> of >=, because if I understand you correctly we only care to notify if\r\n> the flush pointer advanced, not in the case it stayed the same.\r\n\r\nMy thinking was that we needed to read flushRecPtr after registering\r\nthe boundary in case it advanced just before registration. And I used\r\n>= because if flushRecPtr points to the end of the record, we should\r\nbe able to create the .ready file for the segment.\r\n\r\nWe can avoid acquiring the spinlock an extra time if we move the first\r\npart of the cross-segment logic to before we update the local copy of\r\nLogwrtResult. I attached a new version of the patch that does this.\r\n\r\nThe rest looks good to me.\r\n\r\nNathan",
"msg_date": "Thu, 19 Aug 2021 02:46:19 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "In v12 I moved the code around a bit and reworded some comments.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/",
"msg_date": "Fri, 20 Aug 2021 10:33:56 -0400",
"msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "Two things.\n\n1. We use a hash table in shared memory. That's great. The part that's\n   not so great is that in both places where we read items from it, we\n   have to iterate in some way. This seems a bit silly. An array would\n   serve us better, if only we could expand it as needed. However, in\n   shared memory we can't do that. (I think the list of elements we\n   need to memoize is arbitrarily long, if enough processes can be writing\n   WAL at the same time.)\n\n   Now that I think about this again, maybe it's limited by\n   NUM_XLOGINSERT_LOCKS, since there can only be that many records being\n   written down at a time ...\n\n2. There is a new LWLock acquisition that may become a contention point.\n   We acquire the lock in these cases:\n   - WAL record insert, when a segment boundary is crossed (rare\n     enough).\n   - XLogWrite, when a segment needs to be notified.\n\n   Looking again, I think the last point might be a problem actually,\n   because XLogWrite is called with WALWriteLock held. Maybe we should\n   take the NotifySegmentsReadyForArchive() call outside the section\n   locked by WALWriteLock (so put it in XLogWrite callers instead of\n   XLogWrite itself).\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Cómo ponemos nuestros dedos en la arcilla del otro. Eso es la amistad; jugar\nal alfarero y ver qué formas se pueden sacar del otro\" (C. Halloway en\nLa Feria de las Tinieblas, R. Bradbury)\n\n\n",
"msg_date": "Fri, 20 Aug 2021 10:50:18 -0400",
"msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On Fri, Aug 20, 2021 at 10:50 AM alvherre@alvh.no-ip.org\n<alvherre@alvh.no-ip.org> wrote:\n> 1. We use a hash table in shared memory. That's great. The part that's\n> not so great is that in both places where we read items from it, we\n> have to iterate in some way. This seems a bit silly. An array would\n> serve us better, if only we could expand it as needed. However, in\n> shared memory we can't do that. (I think the list of elements we\n> need to memoize is arbitrary long, if enough processes can be writing\n> WAL at the same time.)\n\nWe can't expand the hash table either. It has an initial and maximum\nsize of 16 elements, which means it's basically an expensive array,\nand which also means that it imposes a new limit of 16 *\nwal_segment_size on the size of WAL records. If you exceed that limit,\nI think things just go boom... which I think is not acceptable. I\nthink we can have records in the multi-GB range of wal_level=logical\nand someone chooses a stupid replica identity setting.\n\nIt's actually not clear to me why we need to track multiple entries\nanyway. The scenario postulated by Horiguchi-san in\nhttps://www.postgresql.org/message-id/20201014.090628.839639906081252194.horikyota.ntt@gmail.com\nseems to require that the write position be multiple segments ahead of\nthe flush position, but that seems impossible with the present code,\nbecause XLogWrite() calls issue_xlog_fsync() at once if the segment is\nfilled. So I think, at least with the present code, any record that\nisn't completely flushed to disk has to be at least partially in the\ncurrent segment. And there can be only one record that starts in some\nearlier segment and ends in this one.\n\nI will be the first to admit that the forced end-of-segment syncs\nsuck. They often stall every backend in the entire system at the same\ntime. Everyone fills up the xlog segment really fast and then stalls\nHARD while waiting for that sync to happen. 
So it's arguably better\nnot to do more things that depend on that being how it works, but I\nthink needing a variable-size amount of shared memory is even worse.\nIf we're going to track multiple entries here we need some rule that\nbounds how many of them we can need to track. If the number of entries\nis defined by the number of segment boundaries that a particular\nrecord crosses, it's effectively unbounded, because right now WAL\nrecords can be pretty much arbitrarily big.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 20 Aug 2021 11:27:58 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/20/21, 8:29 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> On Fri, Aug 20, 2021 at 10:50 AM alvherre@alvh.no-ip.org\r\n> <alvherre@alvh.no-ip.org> wrote:\r\n>> 1. We use a hash table in shared memory. That's great. The part that's\r\n>> not so great is that in both places where we read items from it, we\r\n>> have to iterate in some way. This seems a bit silly. An array would\r\n>> serve us better, if only we could expand it as needed. However, in\r\n>> shared memory we can't do that. (I think the list of elements we\r\n>> need to memoize is arbitrary long, if enough processes can be writing\r\n>> WAL at the same time.)\r\n>\r\n> We can't expand the hash table either. It has an initial and maximum\r\n> size of 16 elements, which means it's basically an expensive array,\r\n> and which also means that it imposes a new limit of 16 *\r\n> wal_segment_size on the size of WAL records. If you exceed that limit,\r\n> I think things just go boom... which I think is not acceptable. I\r\n> think we can have records in the multi-GB range of wal_level=logical\r\n> and someone chooses a stupid replica identity setting.\r\n\r\nIf a record spans multiple segments, we only register one segment\r\nboundary. For example, if I insert a record that starts at segment\r\nnumber 1 and stops at 10, I'll insert one segment boundary for segment\r\n10. We'll only create .ready files for segments 1 through 9 once this\r\nrecord is completely flushed to disk.\r\n\r\nI was under the impression that shared hash tables could be expanded\r\nas necessary, but from your note and the following comment, that does\r\nnot seem to be true:\r\n\r\n* Note: for a shared-memory hashtable, nelem needs to be a pretty good\r\n* estimate, since we can't expand the table on the fly. But an unshared\r\n* hashtable can be expanded on-the-fly, so it's better for nelem to be\r\n* on the small side and let the table grow if it's exceeded. 
An overly\r\n* large nelem will penalize hash_seq_search speed without buying much.\r\n\r\n> It's actually not clear to me why we need to track multiple entries\r\n> anyway. The scenario postulated by Horiguchi-san in\r\n> https://www.postgresql.org/message-id/20201014.090628.839639906081252194.horikyota.ntt@gmail.com\r\n> seems to require that the write position be multiple segments ahead of\r\n> the flush position, but that seems impossible with the present code,\r\n> because XLogWrite() calls issue_xlog_fsync() at once if the segment is\r\n> filled. So I think, at least with the present code, any record that\r\n> isn't completely flushed to disk has to be at least partially in the\r\n> current segment. And there can be only one record that starts in some\r\n> earlier segment and ends in this one.\r\n\r\nWe register the boundaries in XLogInsertRecord(), which AFAICT just bumps\r\nthe global write request pointer ahead, so I'm not sure we can make\r\nany assumptions about what is written/flushed at that time. (I see\r\nthat we do end up calling XLogFlush() for XLOG_SWITCH records in\r\nXLogInsertRecord(), but I don't see any other cases where we actually\r\nwrite anything in this function.) Am I missing something?\r\n\r\n> I will be the first to admit that the forced end-of-segment syncs\r\n> suck. They often stall every backend in the entire system at the same\r\n> time. Everyone fills up the xlog segment really fast and then stalls\r\n> HARD while waiting for that sync to happen. So it's arguably better\r\n> not to do more things that depend on that being how it works, but I\r\n> think needing a variable-size amount of shared memory is even worse.\r\n> If we're going to track multiple entries here we need some rule that\r\n> bounds how many of them we can need to track. 
If the number of entries\r\n> is defined by the number of segment boundaries that a particular\r\n> record crosses, it's effectively unbounded, because right now WAL\r\n> records can be pretty much arbitrarily big.\r\n\r\nIf there isn't a way to ensure that the number of entries we need to\r\nstore is bounded, I'm tempted to propose my original patch [0], which\r\njust moves .ready file creation to the very end of XLogWrite(). It's\r\nprobably not a complete solution, but it might be better than what's\r\nthere today.\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/CBDDFA01-6E40-46BB-9F98-9340F4379505%40amazon.com\r\n\r\n",
"msg_date": "Fri, 20 Aug 2021 16:35:59 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On Fri, Aug 20, 2021 at 12:36 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> If a record spans multiple segments, we only register one segment\n> boundary. For example, if I insert a record that starts at segment\n> number 1 and stops at 10, I'll insert one segment boundary for segment\n> 10. We'll only create .ready files for segments 1 through 9 once this\n> record is completely flushed to disk.\n\nOh ... OK. So is there any experimental scenario in which the hash\ntable ends up with more than 1 entry? And if so, how does that happen?\n\n> > It's actually not clear to me why we need to track multiple entries\n> > anyway. The scenario postulated by Horiguchi-san in\n> > https://www.postgresql.org/message-id/20201014.090628.839639906081252194.horikyota.ntt@gmail.com\n> > seems to require that the write position be multiple segments ahead of\n> > the flush position, but that seems impossible with the present code,\n> > because XLogWrite() calls issue_xlog_fsync() at once if the segment is\n> > filled. So I think, at least with the present code, any record that\n> > isn't completely flushed to disk has to be at least partially in the\n> > current segment. And there can be only one record that starts in some\n> > earlier segment and ends in this one.\n>\n> We register the boundaries XLogInsertRecord(), which AFAICT just bumps\n> the global write request pointer ahead, so I'm not sure we can make\n> any assumptions about what is written/flushed at that time. (I see\n> that we do end up calling XLogFlush() for XLOG_SWITCH records in\n> XLogInsertRecord(), but I don't see any other cases where we actually\n> write anything in this function.) Am I missing something?\n\nWell, I'm not sure. 
But I *think* that the code as it exists today is\nsmart enough not to try to archive a segment that hasn't been\ncompletely flushed, and the gap is only that even though the segment\nmight be completely flushed, some portion of the record that is part\nof a later segment might not be flushed, and thus after a crash we\nmight overwrite the already-flushed contents. The patch can make an\nimplementation choice to do some work at XLogInsertRecord() time if it\nlikes, but there's no real hazard at that point. The hazard only\nexists, or so I think, once a segment that contains part of the record\nis fully on disk. But that means, if my previous logic is correct,\nthat the hazard can only exist for at most 1 record at any point in\ntime.\n\n> If there isn't a way to ensure that the number of entries we need to\n> store is bounded, I'm tempted to propose my original patch [0], which\n> just moves .ready file creation to the very end of XLogWrite(). It's\n> probably not a complete solution, but it might be better than what's\n> there today.\n\nDoesn't that allocate memory inside a critical section? I would have\nthought it would cause an immediate assertion failure.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 20 Aug 2021 13:04:58 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/20/21, 10:08 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> On Fri, Aug 20, 2021 at 12:36 PM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> If a record spans multiple segments, we only register one segment\r\n>> boundary. For example, if I insert a record that starts at segment\r\n>> number 1 and stops at 10, I'll insert one segment boundary for segment\r\n>> 10. We'll only create .ready files for segments 1 through 9 once this\r\n>> record is completely flushed to disk.\r\n>\r\n> Oh ... OK. So is there any experimental scenario in which the hash\r\n> table ends up with more than 1 entry? And if so, how does that happen?\r\n\r\nI was able to do this by turning synchronous_commit off, increasing\r\nwal_buffers substantially, and adding sleeps to XLogWrite().\r\n\r\n>> If there isn't a way to ensure that the number of entries we need to\r\n>> store is bounded, I'm tempted to propose my original patch [0], which\r\n>> just moves .ready file creation to the very end of XLogWrite(). It's\r\n>> probably not a complete solution, but it might be better than what's\r\n>> there today.\r\n>\r\n> Doesn't that allocate memory inside a critical section? I would have\r\n> thought it would cause an immediate assertion failure.\r\n\r\nI could probably replace the list with two local variables (start and\r\nend segments).\r\n\r\nThinking about this stuff further, I was wondering if one way to\r\nhandle the bounded shared hash table problem would be to replace the\r\nlatest boundary in the map whenever it was full. But at that point,\r\ndo we even need a hash table? This led me to revisit the two-element\r\napproach that was discussed upthread. What if we only stored the\r\nearliest and latest segment boundaries at any given time? Once the\r\nearliest boundary is added, it never changes until the segment is\r\nflushed and it is removed. The latest boundary, however, will be\r\nupdated any time we register another segment. 
Once the earliest\r\nboundary is removed, we replace it with the latest boundary. This\r\nstrategy could cause us to miss intermediate boundaries, but AFAICT\r\nthe worst case scenario is that we hold off creating .ready files a\r\nbit longer than necessary.\r\n\r\nI'll work on a patch to illustrate what I'm thinking.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Fri, 20 Aug 2021 17:29:22 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Aug-20, Bossart, Nathan wrote:\n\n> On 8/20/21, 8:29 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\n\n> > We can't expand the hash table either. It has an initial and maximum\n> > size of 16 elements, which means it's basically an expensive array,\n> > and which also means that it imposes a new limit of 16 *\n> > wal_segment_size on the size of WAL records. If you exceed that limit,\n> > I think things just go boom... which I think is not acceptable. I\n> > think we can have records in the multi-GB range of wal_level=logical\n> > and someone chooses a stupid replica identity setting.\n> \n> I was under the impression that shared hash tables could be expanded\n> as necessary, but from your note and the following comment, that does\n> not seem to be true:\n\nActually, you were right. Hash tables in shared memory can be expanded.\nThere are some limitations (the hash \"directory\" is fixed size, which\nmeans the hash table get less efficient if it grows too much), but you\ncan definitely create more hash entries than the initial size. See for\nexample element_alloc(), which covers the case of a hash table being\nIS_PARTITIONED -- something that only shmem hash tables can be. Note\nthat ShmemInitHash passes the HASH_ALLOC flag and uses ShmemAllocNoError\nas allocation function, which acquires memory from the shared segment.\n\nThis is a minor thing -- it doesn't affect the fact that the hash table\nis possibly being misused and inefficient -- but I thought it was worth\npointing out.\n\n\nAs an example, consider the LOCK / PROCLOCK hash tables. These can\ncontain more elements than max_backends * max_locks_per_transaction.\nThose elements consume shared memory from the \"allocation slop\" in the\nshared memory segment. It's tough when it happens (as far as I know the\nmemory is never \"returned\" once such a hash table grows to use that\nspace), but it does work.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 20 Aug 2021 13:52:10 -0400",
"msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On Fri, Aug 20, 2021 at 1:29 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> Thinking about this stuff further, I was wondering if one way to\n> handle the bounded shared hash table problem would be to replace the\n> latest boundary in the map whenever it was full. But at that point,\n> do we even need a hash table? This led me to revisit the two-element\n> approach that was discussed upthread. What if we only stored the\n> earliest and latest segment boundaries at any given time? Once the\n> earliest boundary is added, it never changes until the segment is\n> flushed and it is removed. The latest boundary, however, will be\n> updated any time we register another segment. Once the earliest\n> boundary is removed, we replace it with the latest boundary. This\n> strategy could cause us to miss intermediate boundaries, but AFAICT\n> the worst case scenario is that we hold off creating .ready files a\n> bit longer than necessary.\n\nI think this is a promising approach. We could also have a small\nfixed-size array, so that we only have to risk losing track of\nanything when we overflow the array. But I guess I'm still unconvinced\nthat there's a real possibility of genuinely needing multiple\nelements. Suppose we are thinking of adding a second element to the\narray (or the hash table). I feel like it's got to be safe to just\nremove the first one. If not, then apparently the WAL record that\ncaused us to make the first entry isn't totally flushed yet - which I\nstill think is impossible.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 20 Aug 2021 14:19:21 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On Fri, Aug 20, 2021 at 1:52 PM alvherre@alvh.no-ip.org\n<alvherre@alvh.no-ip.org> wrote:\n> Actually, you were right. Hash tables in shared memory can be expanded.\n> There are some limitations (the hash \"directory\" is fixed size, which\n> means the hash table get less efficient if it grows too much), but you\n> can definitely create more hash entries than the initial size. See for\n> example element_alloc(), which covers the case of a hash table being\n> IS_PARTITIONED -- something that only shmem hash tables can be. Note\n> that ShmemInitHash passes the HASH_ALLOC flag and uses ShmemAllocNoError\n> as allocation function, which acquires memory from the shared segment.\n\nI realize that the code supports this ... but I thought we had\nestablished a policy that only the main lock manager's shared hash\ntables, and not any others, are actually allowed to make use of this\nfunctionality. See commit 7c797e7194d969f974abf579cacf30ffdccdbb95.\n\nIt seems like a dangerous thing to rely on in any case, since we can't\npredict how much extra shared memory might actually be available.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 20 Aug 2021 14:40:33 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/20/21, 11:20 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> On Fri, Aug 20, 2021 at 1:29 PM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> Thinking about this stuff further, I was wondering if one way to\r\n>> handle the bounded shared hash table problem would be to replace the\r\n>> latest boundary in the map whenever it was full. But at that point,\r\n>> do we even need a hash table? This led me to revisit the two-element\r\n>> approach that was discussed upthread. What if we only stored the\r\n>> earliest and latest segment boundaries at any given time? Once the\r\n>> earliest boundary is added, it never changes until the segment is\r\n>> flushed and it is removed. The latest boundary, however, will be\r\n>> updated any time we register another segment. Once the earliest\r\n>> boundary is removed, we replace it with the latest boundary. This\r\n>> strategy could cause us to miss intermediate boundaries, but AFAICT\r\n>> the worst case scenario is that we hold off creating .ready files a\r\n>> bit longer than necessary.\r\n>\r\n> I think this is a promising approach. We could also have a small\r\n> fixed-size array, so that we only have to risk losing track of\r\n> anything when we overflow the array. But I guess I'm still unconvinced\r\n> that there's a real possibility of genuinely needing multiple\r\n> elements. Suppose we are thinking of adding a second element to the\r\n> array (or the hash table). I feel like it's got to be safe to just\r\n> remove the first one. If not, then apparently the WAL record that\r\n> caused us to make the first entry isn't totally flushed yet - which I\r\n> still think is impossible.\r\n\r\nI've attached a patch to demonstrate what I'm thinking.\r\n\r\nNathan",
"msg_date": "Fri, 20 Aug 2021 19:41:28 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Aug-20, Bossart, Nathan wrote:\n\n> > On Fri, Aug 20, 2021 at 1:29 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n\n> >> This led me to revisit the two-element\n> >> approach that was discussed upthread. What if we only stored the\n> >> earliest and latest segment boundaries at any given time? Once the\n> >> earliest boundary is added, it never changes until the segment is\n> >> flushed and it is removed. The latest boundary, however, will be\n> >> updated any time we register another segment. Once the earliest\n> >> boundary is removed, we replace it with the latest boundary. This\n> >> strategy could cause us to miss intermediate boundaries, but AFAICT\n> >> the worst case scenario is that we hold off creating .ready files a\n> >> bit longer than necessary.\n\n> I've attached a patch to demonstrate what I'm thinking.\n\nThere is only one thing I didn't like in this new version, which is that\nwe're holding info_lck too much. I've seen info_lck contention be a\nproblem in some workloads and I'd rather not add more stuff to it. I'd\nrather we stick with using a new lock object to protect all the data we\nneed for this job.\n\nShould this new lock object be a spinlock or an lwlock? I think a\nspinlock would generally be better because it's lower overhead and we\ncan't use it in shared mode anywhere, which would be the greatest\nargument for an lwlock. However, I think we avoid letting code run with\nspinlocks held that's not straight-line code, and we have some function\ncalls there.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 20 Aug 2021 17:38:16 -0400",
"msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "Attached is v14 which uses a separate spinlock.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"No me acuerdo, pero no es cierto. No es cierto, y si fuera cierto,\n no me acuerdo.\" (Augusto Pinochet a una corte de justicia)",
"msg_date": "Fri, 20 Aug 2021 19:00:14 -0400",
"msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/20/21, 4:00 PM, \"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org> wrote:\r\n> Attached is v14 which uses a separate spinlock.\r\n\r\nThis looks good to me.\r\n\r\nI was looking at moving the function calls out of the spinlock region.\r\nI don't think the functions are doing anything too expensive, and they\r\nhelp clean up NotifySegmentsReadyForArchive() quite a bit, but I\r\nunderstand why it might be against project policy to do something like\r\nthat. It would be easy enough to get rid of the helper functions if\r\nthat was concern.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Fri, 20 Aug 2021 23:13:11 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Aug-20, Bossart, Nathan wrote:\n\n> I was looking at moving the function calls out of the spinlock region.\n> I don't think the functions are doing anything too expensive, and they\n> help clean up NotifySegmentsReadyForArchive() quite a bit, but I\n> understand why it might be against project policy to do something like\n> that. It would be easy enough to get rid of the helper functions if\n> that was concern.\n\nWell, the thing I realized is that these three helper functions have\nexactly one caller each. I think the compiler is going to inline them,\nso there isn't going to be a function call in the assembly. I haven't\nverified this, though.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Los dioses no protegen a los insensatos. Éstos reciben protección de\notros insensatos mejor dotados\" (Luis Wu, Mundo Anillo)\n\n\n",
"msg_date": "Fri, 20 Aug 2021 19:52:33 -0400",
"msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/20/21, 4:52 PM, \"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org> wrote:\r\n> On 2021-Aug-20, Bossart, Nathan wrote:\r\n>\r\n>> I was looking at moving the function calls out of the spinlock region.\r\n>> I don't think the functions are doing anything too expensive, and they\r\n>> help clean up NotifySegmentsReadyForArchive() quite a bit, but I\r\n>> understand why it might be against project policy to do something like\r\n>> that. It would be easy enough to get rid of the helper functions if\r\n>> that was concern.\r\n>\r\n> Well, the thing I realized is that these three helper functions have\r\n> exactly one caller each. I think the compiler is going to inline them,\r\n> so there isn't going to be a function call in the assembly. I haven't\r\n> verified this, though.\r\n\r\nGood point. It looks like they're getting inlined for me.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Sat, 21 Aug 2021 03:38:18 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Aug-21, Bossart, Nathan wrote:\n\n> > Well, the thing I realized is that these three helper functions have\n> > exactly one caller each. I think the compiler is going to inline them,\n> > so there isn't going to be a function call in the assembly. I haven't\n> > verified this, though.\n> \n> Good point. It looks like they're getting inlined for me.\n\nI still didn't like it, because it looks like we're creating an API for\nwhich there can be only one caller. So I expanded the functions in the\ncaller. It doesn't look too bad. However ...\n\n... while reading the resulting code after backpatching to all branches,\nI realized that if there are no registrations whatsoever, then archiving\nwon't do anything, which surely is the wrong thing to do. The correct\nbehavior should be \"if there are no registrations, then *all* flushed\nsegments can be notified\".\n\nI'll fix that ...\n\nAnother thing I didn't like is that you used a name ending in RecPtr for\nthe LSN, which gives no indication that it really is the *end* LSN, not\nthe start pointer. And it won't play nice with the need to add the\n*start* LSN which we'll need to implement solving the equivalent problem\nfor streaming replication. I'll rename those to\nearliestSegBoundaryEndPtr and latestSegBoundaryEndPtr.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\nSyntax error: function hell() needs an argument.\nPlease choose what hell you want to involve.",
"msg_date": "Mon, 23 Aug 2021 11:49:27 -0400",
"msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/23/21, 8:50 AM, \"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org> wrote:\r\n> ... while reading the resulting code after backpatching to all branches,\r\n> I realized that if there are no registrations whatsoever, then archiving\r\n> won't do anything, which surely is the wrong thing to do. The correct\r\n> behavior should be \"if there are no registrations, then *all* flushed\r\n> segments can be notified\".\r\n\r\nHm. My expectation would be that if there are no registrations, we\r\ncannot create .ready files for the flushed segments. The scenario\r\nwhere I can see that happening is when a record gets flushed to disk\r\nprior to registration. In that case, we'll still eventually register\r\nthe record and wake up the WAL writer process, which will take care of\r\ncreating the .ready files that were missed earlier. Is there another\r\ncase you are thinking of where we could miss registration for a cross-\r\nsegment record altogether?\r\n\r\nNathan\r\n\r\n",
"msg_date": "Mon, 23 Aug 2021 15:57:13 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Aug-23, Bossart, Nathan wrote:\n\n> Hm. My expectation would be that if there are no registrations, we\n> cannot create .ready files for the flushed segments. The scenario\n> where I can see that happening is when a record gets flushed to disk\n> prior to registration. In that case, we'll still eventually register\n> the record and wake up the WAL writer process, which will take care of\n> creating the .ready files that were missed earlier. Is there another\n> case you are thinking of where we could miss registration for a cross-\n> segment record altogether?\n\nI'm thinking of the case where no record cross segment boundaries ever.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 23 Aug 2021 12:31:48 -0400",
"msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/23/21, 9:33 AM, \"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org> wrote:\r\n> On 2021-Aug-23, Bossart, Nathan wrote:\r\n>\r\n>> Hm. My expectation would be that if there are no registrations, we\r\n>> cannot create .ready files for the flushed segments. The scenario\r\n>> where I can see that happening is when a record gets flushed to disk\r\n>> prior to registration. In that case, we'll still eventually register\r\n>> the record and wake up the WAL writer process, which will take care of\r\n>> creating the .ready files that were missed earlier. Is there another\r\n>> case you are thinking of where we could miss registration for a cross-\r\n>> segment record altogether?\r\n>\r\n> I'm thinking of the case where no record cross segment boundaries ever.\r\n\r\nSorry, I'm still not following this one. If we skipped creating\r\n.ready segments due to a crash, we rely on RemoveOldXlogFiles() to\r\ncreate them as needed in the end-of-recovery checkpoint. If a record\r\nfits perfectly in the end of a segment, we'll still register it as a\r\nboundary for the next segment (hence why we use XLByteToSeg() instead\r\nof XLByteToPrevSeg()). If database activity stops completely, there\r\nshouldn't be anything to mark ready.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Mon, 23 Aug 2021 16:48:24 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Aug-23, Bossart, Nathan wrote:\n\n> Sorry, I'm still not following this one. If we skipped creating\n> .ready segments due to a crash, we rely on RemoveOldXlogFiles() to\n> create them as needed in the end-of-recovery checkpoint. If a record\n> fits perfectly in the end of a segment, we'll still register it as a\n> boundary for the next segment (hence why we use XLByteToSeg() instead\n> of XLByteToPrevSeg()). If database activity stops completely, there\n> shouldn't be anything to mark ready.\n\nThe only way .ready files are created is that XLogNotifyWrite() is\ncalled. For regular WAL files during regular operation, that only\nhappens in XLogNotifyWriteSeg(). That, in turn, only happens in\nNotifySegmentsReadyForArchive(). But if the system runs and never\nwrites WAL records that cross WAL boundaries, that function will see\nthat both earliestSegBoundary and latestSegBoundary are MaxXLogSegno,\nand return without doing anything. So no segments will be notified.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 23 Aug 2021 13:14:09 -0400",
"msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Aug-23, alvherre@alvh.no-ip.org wrote:\n\n> The only way .ready files are created is that XLogNotifyWrite() is\n> called. For regular WAL files during regular operation, that only\n> happens in XLogNotifyWriteSeg(). That, in turn, only happens in\n> NotifySegmentsReadyForArchive(). But if the system runs and never\n> writes WAL records that cross WAL boundaries, that function will see\n> that both earliestSegBoundary and latestSegBoundary are MaxXLogSegno,\n> and return without doing anything. So no segments will be notified.\n\nNevermind -- I realized that all segments get registered, not just those\nfor which we generate continuation records.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Use it up, wear it out, make it do, or do without\"\n\n\n",
"msg_date": "Mon, 23 Aug 2021 13:30:12 -0400",
"msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/23/21, 10:31 AM, \"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org> wrote:\r\n> On 2021-Aug-23, alvherre@alvh.no-ip.org wrote:\r\n>\r\n>> The only way .ready files are created is that XLogNotifyWrite() is\r\n>> called. For regular WAL files during regular operation, that only\r\n>> happens in XLogNotifyWriteSeg(). That, in turn, only happens in\r\n>> NotifySegmentsReadyForArchive(). But if the system runs and never\r\n>> writes WAL records that cross WAL boundaries, that function will see\r\n>> that both earliestSegBoundary and latestSegBoundary are MaxXLogSegno,\r\n>> and return without doing anything. So no segments will be notified.\r\n>\r\n> Nevermind -- I realized that all segments get registered, not just those\r\n> for which we generate continuation records.\r\n\r\nAh, okay. BTW the other changes you mentioned made sense to me.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Mon, 23 Aug 2021 17:38:16 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Aug-23, Bossart, Nathan wrote:\n\n> Ah, okay. BTW the other changes you mentioned made sense to me.\n\nThanks. I've pushed this now to all live branches.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n<Schwern> It does it in a really, really complicated way\n<crab> why does it need to be complicated?\n<Schwern> Because it's MakeMaker.\n\n\n",
"msg_date": "Mon, 23 Aug 2021 15:55:03 -0400",
"msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/23/21, 12:55 PM, \"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org> wrote:\r\n> Thanks. I've pushed this now to all live branches.\r\n\r\nThank you! I appreciate the thorough reviews. Should we make a new\r\nthread for the streaming replication fix?\r\n\r\nNathan\r\n\r\n",
"msg_date": "Mon, 23 Aug 2021 20:23:26 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Aug-23, Bossart, Nathan wrote:\n\n> On 8/23/21, 12:55 PM, \"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org> wrote:\n> > Thanks. I've pushed this now to all live branches.\n> \n> Thank you! I appreciate the thorough reviews. Should we make a new\n> thread for the streaming replication fix?\n\nYeah, this one is long enough :-)\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 23 Aug 2021 16:39:15 -0400",
"msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "\n\nOn 2021/08/24 4:55, alvherre@alvh.no-ip.org wrote:\n> On 2021-Aug-23, Bossart, Nathan wrote:\n> \n>> Ah, okay. BTW the other changes you mentioned made sense to me.\n> \n> Thanks. I've pushed this now to all live branches.\n\nThanks a lot!\n\n+\t\t/*\n+\t\t * There's a chance that the record was already flushed to disk and we\n+\t\t * missed marking segments as ready for archive. If this happens, we\n+\t\t * nudge the WALWriter, which will take care of notifying segments as\n+\t\t * needed.\n+\t\t */\n+\t\tif (StartSeg != EndSeg && XLogArchivingActive() &&\n+\t\t\tLogwrtResult.Flush >= EndPos && ProcGlobal->walwriterLatch)\n+\t\t\tSetLatch(ProcGlobal->walwriterLatch);\n\nIs this really necessary?\n\nIf LogwrtResult.Flush >= EndPos, which means that another process already\nhas flushed the record concurrently and updated XLogCtl->LogwrtResult.Flush.\nThis situation also means that that another process called\nNotifySegmentsReadyForArchive(LogwrtResult.Flush). Right?\n\nIf this understanding is right, there seems no need to wake walwriter up here\nso that it can call NotifySegmentsReadyForArchive(LogwrtResult.Flush) gain.\nThought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 26 Aug 2021 03:00:10 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/25/21, 11:01 AM, \"Fujii Masao\" <masao.fujii@oss.nttdata.com> wrote:\r\n> If LogwrtResult.Flush >= EndPos, which means that another process already\r\n> has flushed the record concurrently and updated XLogCtl->LogwrtResult.Flush.\r\n> This situation also means that that another process called\r\n> NotifySegmentsReadyForArchive(LogwrtResult.Flush). Right?\r\n\r\nIf the segment boundary wasn't registered before the other process\r\ncalled NotifySegmentsReadyForArchive(), then it couldn't have used the\r\nboundary for deciding which .ready files to create.\r\n\r\n> If this understanding is right, there seems no need to wake walwriter up here\r\n> so that it can call NotifySegmentsReadyForArchive(LogwrtResult.Flush) gain.\r\n> Thought?\r\n\r\nWe're actually discussing this right now in another thread [0]. I\r\nthink we might be able to get rid of that part if we move the boundary\r\nregistration to before we release the WAL insert lock(s).\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/DE60B9AA-9670-47DA-9678-6C79BCD884E3%40amazon.com\r\n\r\n",
"msg_date": "Wed, 25 Aug 2021 18:30:10 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "Hi,\n\nOn 2021-08-23 15:55:03 -0400, alvherre@alvh.no-ip.org wrote:\n> On 2021-Aug-23, Bossart, Nathan wrote:\n> \n> > Ah, okay. BTW the other changes you mentioned made sense to me.\n> \n> Thanks. I've pushed this now to all live branches.\n\nWhile rebasing the aio patchset ontop of HEAD I noticed that this commit added\nanother atomic operation to XLogWrite() with archiving enabled. The WAL write\npath is really quite hot, and at least some of the\nNotifySegmentsReadyForArchive() calls are done while WALWriteLock is held.\n\nI think we should at least try to make the fast-path where no segment\nboundaries were crossed use no atomic operations.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 28 Aug 2021 19:37:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Aug-28, Andres Freund wrote:\n\n> While rebasing the aio patchset ontop of HEAD I noticed that this commit added\n> another atomic operation to XLogWrite() with archiving enabled. The WAL write\n> path is really quite hot, and at least some of the\n> NotifySegmentsReadyForArchive() calls are done while WALWriteLock is held.\n> \n> I think we should at least try to make the fast-path where no segment\n> boundaries were crossed use no atomic operations.\n\nI think the best way to achieve this is is to rely completely on\nwalwriter doing the segment notification, so that the WAL write done by\nbackend would only do a latch set.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 30 Aug 2021 15:51:54 -0400",
"msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/30/21, 12:52 PM, \"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org> wrote:\r\n> On 2021-Aug-28, Andres Freund wrote:\r\n>\r\n>> While rebasing the aio patchset ontop of HEAD I noticed that this commit added\r\n>> another atomic operation to XLogWrite() with archiving enabled. The WAL write\r\n>> path is really quite hot, and at least some of the\r\n>> NotifySegmentsReadyForArchive() calls are done while WALWriteLock is held.\r\n>>\r\n>> I think we should at least try to make the fast-path where no segment\r\n>> boundaries were crossed use no atomic operations.\r\n>\r\n> I think the best way to achieve this is is to rely completely on\r\n> walwriter doing the segment notification, so that the WAL write done by\r\n> backend would only do a latch set.\r\n\r\n+1. If we do that, we may also want to move\r\nNotifySegmentsReadyForArchive() to after the call to\r\nXLogBackgroundFlush() in the WAL writer.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Mon, 30 Aug 2021 19:59:48 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "Hi,\n\nOn 2021-08-30 15:51:54 -0400, alvherre@alvh.no-ip.org wrote:\n> On 2021-Aug-28, Andres Freund wrote:\n>\n> > While rebasing the aio patchset ontop of HEAD I noticed that this commit added\n> > another atomic operation to XLogWrite() with archiving enabled. The WAL write\n> > path is really quite hot, and at least some of the\n> > NotifySegmentsReadyForArchive() calls are done while WALWriteLock is held.\n> >\n> > I think we should at least try to make the fast-path where no segment\n> > boundaries were crossed use no atomic operations.\n>\n> I think the best way to achieve this is is to rely completely on\n> walwriter doing the segment notification, so that the WAL write done by\n> backend would only do a latch set.\n\nWhen were you thinking of doing the latch sets? Adding a latch set for every\nXLogWrite() wouldn't be cheap either. Both because syscalls under a lock\naren't free and because waking up walsender even more often isn't free (we\nalready have a few threads about reducing the signalling frequency).\n\nThere's also the question of what to do with single user mode. We shouldn't\njust skip creating .ready files there...\n\n\nAlthough, the more I think about, the more I am confused about the trailing\n\tif (XLogArchivingActive())\n\t\tNotifySegmentsReadyForArchive(LogwrtResult.Flush);\n\nin XLogWrite(). Shouldn't that at the very least be inside the \"If asked to\nflush, do so\" branch? Outside that and the finishing_seg branch\nLogwrtResult.Flush won't have moved, right? So the call to\nNotifySegmentsReadyForArchive() can't do anything, no?\n\nNor does it seem like we'd ever need to call NotifySegmentsReadyForArchive()\nif we started writing on the current page - flushRecPtr can't move across a\nsegment boundary in that case.\n\n\nI hadn't yet realized that this commit doesn't just make XLogWrite() more\nexpensive, it also makes XLogInsertRecord() more expensive :(. 
Adding two\ndivisions to XLogInsertRecord() isn't nice, especially as it happens\neven if !XLogArchivingActive().\n\n\nI can't really convince myself this deals correctly with multiple segment\nspanning records and with records spanning more than one segment? It'd be\neasier to understand if the new XLogCtlData variables were documented...\n\nIf there's one record from segment s0 to s1 and one from s1 to s4, and\nwal_buffers is big enough to contain them all, the first record will set\nearliestSegBoundary = s1\nthe second\nlatestSegBoundary = s4.\n\nWhen s1 is fully written out, NotifySegmentsReadyForArchive() will set\nearliestSegBoundary = latestSegBoundary = s4 and create .reaady for .s1 -\nok. But when when s2 is flushed, we'll afaict happily create .ready files\nfor s2, s3 despite s4 not yet being written, because earliestSegBoundary is\nnow s4.\n\nI think there's other issues as well.\n\n\nThe more I look at this commit, the less I believe it's right.\n\n\nThe whole approach here of delaying .ready creation for these types of\nsegments seems wrong to me. Doesn't the exact same problem also exist for\nstreaming rep - which one can also use to maintain a PITR archive? walsender\nsends up to the flush location, and pg_receivewal's FindStreamingStart() will\nafaict just continue receiving from after that point.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 30 Aug 2021 14:05:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/30/21, 2:06 PM, \"Andres Freund\" <andres@anarazel.de> wrote:\r\n> When were you thinking of doing the latch sets? Adding a latch set for every\r\n> XLogWrite() wouldn't be cheap either. Both because syscalls under a lock\r\n> aren't free and because waking up walsender even more often isn't free (we\r\n> already have a few threads about reducing the signalling frequency).\r\n>\r\n> There's also the question of what to do with single user mode. We shouldn't\r\n> just skip creating .ready files there...\r\n\r\nGood point.\r\n\r\n> Although, the more I think about, the more I am confused about the trailing\r\n> if (XLogArchivingActive())\r\n> NotifySegmentsReadyForArchive(LogwrtResult.Flush);\r\n>\r\n> in XLogWrite(). Shouldn't that at the very least be inside the \"If asked to\r\n> flush, do so\" branch? Outside that and the finishing_seg branch\r\n> LogwrtResult.Flush won't have moved, right? So the call to\r\n> NotifySegmentsReadyForArchive() can't do anything, no?\r\n\r\nThe registration logic looks like this:\r\n 1. Register boundary\r\n 2. Get flush location from shared memory\r\n 3. If flush location >= our just-registered boundary, nudge\r\n the WAL writer to create .ready files if needed\r\n\r\nIf we called NotifySegmentsReadyForArchive() before we updated the\r\nflush location in shared memory, we might skip nudging the WAL writer\r\neven though we should.\r\n\r\n> Nor does it seem like we'd ever need to call NotifySegmentsReadyForArchive()\r\n> if we started writing on the current page - flushRecPtr can't move across a\r\n> segment boundary in that case.\r\n\r\nI think there is a chance that we've crossed one of our recorded\r\nsegment boundaries anytime the flush pointer moves.\r\n\r\n> When s1 is fully written out, NotifySegmentsReadyForArchive() will set\r\n> earliestSegBoundary = latestSegBoundary = s4 and create .reaady for .s1 -\r\n> ok. 
But when when s2 is flushed, we'll afaict happily create .ready files\r\n> for s2, s3 despite s4 not yet being written, because earliestSegBoundary is\r\n> now s4.\r\n\r\nIn this case, the .ready files for s2 and s3 wouldn't be created until\r\ns4 is flushed to disk.\r\n\r\n> The whole approach here of delaying .ready creation for these types of\r\n> segments seems wrong to me. Doesn't the exact same problem also exist for\r\n> streaming rep - which one can also use to maintain a PITR archive? walsender\r\n> sends up to the flush location, and pg_receivewal's FindStreamingStart() will\r\n> afaict just continue receiving from after that point.\r\n\r\nThe problem with streaming replication is being discussed in a new\r\nthread [0].\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/202108232252.dh7uxf6oxwcy%40alvherre.pgsql\r\n\r\n",
"msg_date": "Mon, 30 Aug 2021 22:39:04 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/30/21, 3:40 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> On 8/30/21, 2:06 PM, \"Andres Freund\" <andres@anarazel.de> wrote:\r\n>> Although, the more I think about, the more I am confused about the trailing\r\n>> if (XLogArchivingActive())\r\n>> NotifySegmentsReadyForArchive(LogwrtResult.Flush);\r\n>>\r\n>> in XLogWrite(). Shouldn't that at the very least be inside the \"If asked to\r\n>> flush, do so\" branch? Outside that and the finishing_seg branch\r\n>> LogwrtResult.Flush won't have moved, right? So the call to\r\n>> NotifySegmentsReadyForArchive() can't do anything, no?\r\n>\r\n> The registration logic looks like this:\r\n> 1. Register boundary\r\n> 2. Get flush location from shared memory\r\n> 3. If flush location >= our just-registered boundary, nudge\r\n> the WAL writer to create .ready files if needed\r\n>\r\n> If we called NotifySegmentsReadyForArchive() before we updated the\r\n> flush location in shared memory, we might skip nudging the WAL writer\r\n> even though we should.\r\n\r\nIn the other thread [0], we're considering moving boundary\r\nregistration to before WALInsertLockRelease(). If I'm right that this\r\nremoves the race condition in question, we should be able to move the\r\ncall to NotifySegmentsReadyForArchive() at the end of XLogWrite() to\r\nthe if-asked-to-flush branch.\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/DE60B9AA-9670-47DA-9678-6C79BCD884E3%40amazon.com\r\n\r\n",
"msg_date": "Mon, 30 Aug 2021 23:08:42 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "Hi,\n\nOn 2021-08-30 22:39:04 +0000, Bossart, Nathan wrote:\n> On 8/30/21, 2:06 PM, \"Andres Freund\" <andres@anarazel.de> wrote:\n> > When were you thinking of doing the latch sets? Adding a latch set for every\n> > XLogWrite() wouldn't be cheap either. Both because syscalls under a lock\n> > aren't free and because waking up walsender even more often isn't free (we\n> > already have a few threads about reducing the signalling frequency).\n> >\n> > There's also the question of what to do with single user mode. We shouldn't\n> > just skip creating .ready files there...\n> \n> Good point.\n> \n> > Although, the more I think about, the more I am confused about the trailing\n> > if (XLogArchivingActive())\n> > NotifySegmentsReadyForArchive(LogwrtResult.Flush);\n> >\n> > in XLogWrite(). Shouldn't that at the very least be inside the \"If asked to\n> > flush, do so\" branch? Outside that and the finishing_seg branch\n> > LogwrtResult.Flush won't have moved, right? So the call to\n> > NotifySegmentsReadyForArchive() can't do anything, no?\n> \n> The registration logic looks like this:\n> 1. Register boundary\n> 2. Get flush location from shared memory\n> 3. If flush location >= our just-registered boundary, nudge\n> the WAL writer to create .ready files if needed\n> \n> If we called NotifySegmentsReadyForArchive() before we updated the\n> flush location in shared memory, we might skip nudging the WAL writer\n> even though we should.\n\nThat's trivial to address - just have a local variable saying whether we need\nto call NotifySegmentsReadyForArchive().\n\nNote that the finishing_seg path currently calls\nNotifySegmentsReadyForArchive() before the shared memory flush location is\nupdated.\n\n\n> > When s1 is fully written out, NotifySegmentsReadyForArchive() will set\n> > earliestSegBoundary = latestSegBoundary = s4 and create .reaady for .s1 -\n> > ok. 
But when when s2 is flushed, we'll afaict happily create .ready files\n> > for s2, s3 despite s4 not yet being written, because earliestSegBoundary is\n> > now s4.\n> \n> In this case, the .ready files for s2 and s3 wouldn't be created until\n> s4 is flushed to disk.\n\nI don't think that's true as the code stands today? The\nNotifySegmentsReadyForArchive() for s2 will update earliestSegBoundary to s4,\nbecause latestSegBoundary = 4 and earliestSegBoundary = 1, triggering the\nkeep_latest branch. Any subsequent NotifySegmentsReadyForArchive() with a\nsegment < 4 will then be able to flush s2 and s3?\n\n\n> > The whole approach here of delaying .ready creation for these types of\n> > segments seems wrong to me. Doesn't the exact same problem also exist for\n> > streaming rep - which one can also use to maintain a PITR archive? walsender\n> > sends up to the flush location, and pg_receivewal's FindStreamingStart() will\n> > afaict just continue receiving from after that point.\n> \n> The problem with streaming replication is being discussed in a new\n> thread [0].\n\nI don't think it's sensible to fix these separately. It'd be one thing to do\nthat for HEAD, but on the back branches? And that this patch hasn't gotten any\nperformance testing is scary.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 30 Aug 2021 19:38:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/30/21, 7:39 PM, \"Andres Freund\" <andres@anarazel.de> wrote:\r\n> On 2021-08-30 22:39:04 +0000, Bossart, Nathan wrote:\r\n>> If we called NotifySegmentsReadyForArchive() before we updated the\r\n>> flush location in shared memory, we might skip nudging the WAL writer\r\n>> even though we should.\r\n>\r\n> That's trivial to address - just have a local variable saying whether we need\r\n> to call NotifySegmentsReadyForArchive().\r\n\r\nI think we can remove the race condition entirely by moving boundary\r\nregistration to before WALInsertLockRelease(). I attached a patch for\r\ndiscussion.\r\n\r\n>> > When s1 is fully written out, NotifySegmentsReadyForArchive() will set\r\n>> > earliestSegBoundary = latestSegBoundary = s4 and create .reaady for .s1 -\r\n>> > ok. But when when s2 is flushed, we'll afaict happily create .ready files\r\n>> > for s2, s3 despite s4 not yet being written, because earliestSegBoundary is\r\n>> > now s4.\r\n>>\r\n>> In this case, the .ready files for s2 and s3 wouldn't be created until\r\n>> s4 is flushed to disk.\r\n>\r\n> I don't think that's true as the code stands today? The\r\n> NotifySegmentsReadyForArchive() for s2 will update earliestSegBoundary to s4,\r\n> because latestSegBoundary = 4 and earliestSegBoundary = 1, triggering the\r\n> keep_latest branch. Any subsequent NotifySegmentsReadyForArchive() with a\r\n> segment < 4 will then be able to flush s2 and s3?\r\n\r\nWhen flushRecPtr is less than both of the segment boundaries,\r\nNotifySegmentsReadyForArchive() will return without doing anything.\r\nAt least, that was the intent. If there is some reason it's not\r\nactually working that way, I can work on fixing it.\r\n\r\n> I don't think it's sensible to fix these separately. It'd be one thing to do\r\n> that for HEAD, but on the back branches? And that this patch hasn't gotten any\r\n> performance testing is scary.\r\n\r\nAre there any specific performance tests you'd like to see? 
I don't\r\nmind running a couple.\r\n\r\nNathan",
"msg_date": "Tue, 31 Aug 2021 06:45:06 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "Hi\n\nOn 2021-08-31 06:45:06 +0000, Bossart, Nathan wrote:\n> On 8/30/21, 7:39 PM, \"Andres Freund\" <andres@anarazel.de> wrote:\n> > On 2021-08-30 22:39:04 +0000, Bossart, Nathan wrote:\n> >> If we called NotifySegmentsReadyForArchive() before we updated the\n> >> flush location in shared memory, we might skip nudging the WAL writer\n> >> even though we should.\n> >\n> > That's trivial to address - just have a local variable saying whether we need\n> > to call NotifySegmentsReadyForArchive().\n>\n> I think we can remove the race condition entirely by moving boundary\n> registration to before WALInsertLockRelease(). I attached a patch for\n> discussion.\n\nI think it's a bad idea to move more code to before\nWALInsertLockRelease. There's a very limited number of xlog insert slots, and\nWAL flushes (e.g. commits) need to wait for insertions to finish.\n\n\n> >> > When s1 is fully written out, NotifySegmentsReadyForArchive() will set\n> >> > earliestSegBoundary = latestSegBoundary = s4 and create .reaady for .s1 -\n> >> > ok. But when when s2 is flushed, we'll afaict happily create .ready files\n> >> > for s2, s3 despite s4 not yet being written, because earliestSegBoundary is\n> >> > now s4.\n> >>\n> >> In this case, the .ready files for s2 and s3 wouldn't be created until\n> >> s4 is flushed to disk.\n> >\n> > I don't think that's true as the code stands today? The\n> > NotifySegmentsReadyForArchive() for s2 will update earliestSegBoundary to s4,\n> > because latestSegBoundary = 4 and earliestSegBoundary = 1, triggering the\n> > keep_latest branch. Any subsequent NotifySegmentsReadyForArchive() with a\n> > segment < 4 will then be able to flush s2 and s3?\n>\n> When flushRecPtr is less than both of the segment boundaries,\n> NotifySegmentsReadyForArchive() will return without doing anything.\n> At least, that was the intent. If there is some reason it's not\n> actually working that way, I can work on fixing it.\n\nBut that's not OK either! 
Consider a scenario when there's small records each\nspanning just a bit into the next segment, and initially all the data is in\nwal_buffers.\n\nRegisterSegmentBoundary(s1, s1+10)\nearliestSegBoundary = s1\nearliestSegBoundaryEndPtr = s1+10\n\nRegisterSegmentBoundary(s2, s2+10)\nearliestSegBoundary = s1\nearliestSegBoundaryEndPtr = s1+10\nlatestSegBoundary = s2\nlatestSegBoundaryEndPtr = s2 + 10\n\nRegisterSegmentBoundary(s3, s3+10)\nearliestSegBoundary = s1\nearliestSegBoundaryEndPtr = s1+10\nlatestSegBoundary = s2\nlatestSegBoundaryEndPtr = s2 + 10\n\nRegisterSegmentBoundary(s4, s4+10)\nearliestSegBoundary = s1\nearliestSegBoundaryEndPtr = s1+10\nlatestSegBoundary = s4\nlatestSegBoundaryEndPtr = s4 + 10\n\nIf there's now a flush request including all of s3, we'll have the following\nsequence of notifies:\n\nNotifySegmentsReadyForArchive(s1)\nnothing happens, smaller than s1+10\n\nNotifySegmentsReadyForArchive(s2)\nearliestSegBoundary = s4\nearliestSegBoundaryEndPtr = s4+10\nlatestSegBoundary = s4\nlatestSegBoundaryEndPtr = s4 + 10\nlatest_boundary_seg = s1\n\nNotifySegmentsReadyForArchive(s3)\nnothing happens, flush is smaller than s4\n\nIf the record ending at s4 + 10 isn't an async commit (and thus\nXLogCtl->asyncXactLSN is smaller), and there are no further records, we can\nend up waiting effectively forever for s2 (and s3) to be archived. If all\nother connections (and autovac etc) are idle, there will be no XLogFlush()\ncalls, nor will XLogBackgroundFlush() do anything, because it'll hit the \"If\nalready known flushed\" path, because the the first page in s4 is only\npartially filled.\n\nAm I missing something?\n\n\n> > I don't think it's sensible to fix these separately. It'd be one thing to do\n> > that for HEAD, but on the back branches? And that this patch hasn't gotten any\n> > performance testing is scary.\n>\n> Are there any specific performance tests you'd like to see? 
I don't\n> mind running a couple.\n\n- Parallel copy with > 8 processes\n\n- Parallel non-transactional insertion of small-medium records\n Simulates inserting rows within a transaction\n- Parallel transactional insertion of small-medium sized records, with fsync=on\n Plain oltp writes\n- Parallel transactional insertion of small-medium sized records, with fsync=off\n fsync=off to simulate a fast server-class SSD (where fsync is\n instantaneous). Of course, if you have one of those, you can also use that.\n\nFor the oltp ones I've had good experience simulating workloads with\npg_logical_emit_message(). That just hits the WAL path, *drastically* reducing\nthe variance / shortening the required test duration.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 31 Aug 2021 00:43:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/31/21, 12:44 AM, \"Andres Freund\" <andres@anarazel.de> wrote:\r\n> If there's now a flush request including all of s3, we'll have the following\r\n> sequence of notifies:\r\n>\r\n> NotifySegmentsReadyForArchive(s1)\r\n> nothing happens, smaller than s1+10\r\n>\r\n> NotifySegmentsReadyForArchive(s2)\r\n> earliestSegBoundary = s4\r\n> earliestSegBoundaryEndPtr = s4+10\r\n> latestSegBoundary = s4\r\n> latestSegBoundaryEndPtr = s4 + 10\r\n> latest_boundary_seg = s1\r\n>\r\n> NotifySegmentsReadyForArchive(s3)\r\n> nothing happens, flush is smaller than s4\r\n\r\nWhen earliestSegBoundary is set to s4, latestSegBoundary will be set\r\nto MaxXLogSegNo.\r\n\r\n> If the record ending at s4 + 10 isn't an async commit (and thus\r\n> XLogCtl->asyncXactLSN is smaller), and there are no further records, we can\r\n> end up waiting effectively forever for s2 (and s3) to be archived. If all\r\n> other connections (and autovac etc) are idle, there will be no XLogFlush()\r\n> calls, nor will XLogBackgroundFlush() do anything, because it'll hit the \"If\r\n> already known flushed\" path, because the the first page in s4 is only\r\n> partially filled.\r\n\r\nI'm not following why s4 wouldn't be flushed in this example. Even if\r\nthe first page in s4 is only partially filled, that portion of the\r\nrecord should still get flushed, and we'll create the .ready files for\r\ns2 and s3 at that time. I tested this by adding some debug logging\r\nand creating a small record that crossed segment boundaries but didn't\r\nfill the first page on the next segment, and the .ready file was\r\ncreated as expected. Is there a case where we wouldn't flush the end\r\nof the record to disk?\r\n\r\nDuring my testing, I did find an obvious bug. 
We probably shouldn't\r\nbe calling NotifySegmentsReadyForArchive() when archiving isn't\r\nenabled.\r\n\r\ndiff --git a/src/backend/postmaster/walwriter.c b/src/backend/postmaster/walwriter.c\r\nindex 6a1e16edc2..8ca0d8e616 100644\r\n--- a/src/backend/postmaster/walwriter.c\r\n+++ b/src/backend/postmaster/walwriter.c\r\n@@ -253,7 +253,8 @@ WalWriterMain(void)\r\n * here to handle a race condition where WAL is flushed to disk prior\r\n * to registering the segment boundary.\r\n */\r\n- NotifySegmentsReadyForArchive(GetFlushRecPtr());\r\n+ if (XLogArchivingActive())\r\n+ NotifySegmentsReadyForArchive(GetFlushRecPtr());\r\n\r\n /*\r\n * Do what we're here for; then, if XLogBackgroundFlush() found useful\r\n\r\nNathan\r\n\r\n",
"msg_date": "Tue, 31 Aug 2021 17:01:31 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "Hi,\n\nOn 2021-08-31 17:01:31 +0000, Bossart, Nathan wrote:\n> > If the record ending at s4 + 10 isn't an async commit (and thus\n> > XLogCtl->asyncXactLSN is smaller), and there are no further records, we can\n> > end up waiting effectively forever for s2 (and s3) to be archived. If all\n> > other connections (and autovac etc) are idle, there will be no XLogFlush()\n> > calls, nor will XLogBackgroundFlush() do anything, because it'll hit the \"If\n> > already known flushed\" path, because the the first page in s4 is only\n> > partially filled.\n> \n> I'm not following why s4 wouldn't be flushed in this example. Even if\n> the first page in s4 is only partially filled, that portion of the\n> record should still get flushed, and we'll create the .ready files for\n> s2 and s3 at that time.\n\nWhat would trigger the flushing? We don't write out partially filled pages\nunless\na) we're explicitly flushing an LSN on the partial page (e.g. because a\n synchronous commit record resides on it)\nb) there's an async commit (i.e. commit with synchronous_commit=off) on the\n page\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 31 Aug 2021 10:18:36 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/31/21, 10:21 AM, \"Andres Freund\" <andres@anarazel.de> wrote:\r\n> What would trigger the flushing? We don't write out partially filled pages\r\n> unless\r\n> a) we're explicitly flushing an LSN on the partial page (e.g. because a\r\n> synchronous commit record resides on it)\r\n> b) there's an async commit (i.e. commit with synchronous_commit=off) on the\r\n> page\r\n\r\nAh, so your point is that an open transaction that has written a\r\npartial page on the next segment wouldn't trigger a flush. What\r\nappears to happen in this case is that bgwriter eventually creates a\r\nxl_running_xacts record and nudges walwriter to flush it to disk, at\r\nwhich point the .ready file(s) will be created. That's admittedly a\r\nbit fragile.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Tue, 31 Aug 2021 18:09:36 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "Hi,\n\nOn 2021-08-31 18:09:36 +0000, Bossart, Nathan wrote:\n> On 8/31/21, 10:21 AM, \"Andres Freund\" <andres@anarazel.de> wrote:\n> > What would trigger the flushing? We don't write out partially filled pages\n> > unless\n> > a) we're explicitly flushing an LSN on the partial page (e.g. because a\n> > synchronous commit record resides on it)\n> > b) there's an async commit (i.e. commit with synchronous_commit=off) on the\n> > page\n> \n> Ah, so your point is that an open transaction that has written a\n> partial page on the next segment wouldn't trigger a flush.\n\nDoesn't have to be a transaction, can be a checkpoint or xl_running_xacts, or\n... as well.\n\n\n> What appears to happen in this case is that bgwriter eventually creates a\n> xl_running_xacts record and nudges walwriter to flush it to disk, at which\n> point the .ready file(s) will be created. That's admittedly a bit fragile.\n\nThat's not guaranteed to happen. If e.g. the partial record is a checkpoint or\na xl_running_xacts, we'll not trigger further WAL writes in the background,\nunless autovacuum ends up doing something.\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Tue, 31 Aug 2021 13:29:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 8/31/21, 1:30 PM, \"Andres Freund\" <andres@anarazel.de> wrote:\r\n> On 2021-08-31 18:09:36 +0000, Bossart, Nathan wrote:\r\n>> What appears to happen in this case is that bgwriter eventually creates a\r\n>> xl_running_xacts record and nudges walwriter to flush it to disk, at which\r\n>> point the .ready file(s) will be created. That's admittedly a bit fragile.\r\n>\r\n> That's not guaranteed to happen. If e.g. the partial record is a checkpoint or\r\n> a xl_running_xacts, we'll not trigger further WAL writes in the background,\r\n> unless autovacuum ends up doing something.\r\n\r\nRight. Per the attached patch, a simple way to handle that could be\r\nto teach XLogBackgroundFlush() to flush to the \"earliest\" segment\r\nboundary if it doesn't find anything else to do. I think you could\r\nstill miss creating a .ready file for the previous segment in single-\r\nuser mode, though.\r\n\r\nNathan",
"msg_date": "Tue, 31 Aug 2021 23:31:15 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-08-31 23:31:15 +0000, Bossart, Nathan wrote:\n> On 8/31/21, 1:30 PM, \"Andres Freund\" <andres@anarazel.de> wrote:\n> > On 2021-08-31 18:09:36 +0000, Bossart, Nathan wrote:\n> >> What appears to happen in this case is that bgwriter eventually creates a\n> >> xl_running_xacts record and nudges walwriter to flush it to disk, at which\n> >> point the .ready file(s) will be created. That's admittedly a bit fragile.\n> >\n> > That's not guaranteed to happen. If e.g. the partial record is a checkpoint or\n> > a xl_running_xacts, we'll not trigger further WAL writes in the background,\n> > unless autovacuum ends up doing something.\n> \n> Right. Per the attached patch, a simple way to handle that could be\n> to teach XLogBackgroundFlush() to flush to the \"earliest\" segment\n> boundary if it doesn't find anything else to do. I think you could\n> still miss creating a .ready file for the previous segment in single-\n> user mode, though.\n\nMaybe, but this is getting uglier and uglier.\n\nI think patch should be reverted. It's not in a state that's appropriate for\nthe backbranches.\n\n\n",
"msg_date": "Tue, 31 Aug 2021 17:23:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Aug-31, Andres Freund wrote:\n\n> Maybe, but this is getting uglier and uglier.\n> \n> I think patch should be reverted. It's not in a state that's appropriate for\n> the backbranches.\n\nYeah, that's becoming my conclusion too -- undo that, and start from\nscratch using the other idea.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 31 Aug 2021 20:52:05 -0400",
"msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On Tue, Aug 31, 2021 at 08:52:05PM -0400, alvherre@alvh.no-ip.org wrote:\n> Yeah, that's becoming my conclusion too -- undo that, and start from\n> scratch using the other idea.\n\nThat's about 515e3d8, right? I have not looked in details at what you\nhave here, but this produces a compilation warning on Windows for me\nwith this part of the patch:\n+RegisterSegmentBoundary(XLogSegNo seg, XLogRecPtr endpos)\n+{\n+ XLogSegNo segno PG_USED_FOR_ASSERTS_ONLY;\n\nsegno gets to be an unreferenced local variable.\n--\nMichael",
"msg_date": "Wed, 1 Sep 2021 10:55:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
},
{
"msg_contents": "On 2021-Sep-01, Michael Paquier wrote:\n\n> That's about 515e3d8, right?\n\nYes.\n\n> I have not looked in details at what you have here, but this produces\n> a compilation warning on Windows for me with this part of the patch:\n\nThis seems a tiny speck in a sea of bogosity. If you want to silence\nthe warning, be my guest, but in the long run I am inclined to revert\nthe whole commit once I have a better picture of a way forward.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 31 Aug 2021 22:15:27 -0400",
"msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: archive status \".ready\" files may be created too early"
}
]
[
{
"msg_contents": "On 12.1, fresh initdb the following query gives me the error\n\"ERROR: failed to construct the join relation\"\n\n SELECT FROM (\n SELECT FROM pg_catalog.pg_stat_bgwriter AS ref_0\n LEFT JOIN pg_catalog.pg_stat_bgwriter AS ref_1 ON (true), LATERAL (\n SELECT FROM pg_catalog.pg_publication AS ref_2, LATERAL (\n SELECT FROM pg_catalog.pg_class\n WHERE ref_1.buffers_alloc IS NOT NULL\n ) AS subq_0\n WHERE true\n LIMIT 1\n ) AS subq_1\n WHERE true\n ) AS subq_2\n\nIf you move the limit up into subq_0, then the error changes to\n\"ERROR: failed to build any 2-way joins\"\n\n SELECT FROM (\n SELECT FROM pg_catalog.pg_stat_bgwriter AS ref_0\n LEFT JOIN pg_catalog.pg_stat_bgwriter AS ref_1 ON (true), LATERAL (\n SELECT FROM pg_catalog.pg_publication AS ref_2, LATERAL (\n SELECT FROM pg_catalog.pg_class\n WHERE ref_1.buffers_alloc IS NOT NULL\n LIMIT 1\n ) AS subq_0\n WHERE true\n ) AS subq_1\n WHERE true\n ) AS subq_2\n\nI'm unable to reproduce either of the errors on 11.6 or 11.4. I haven't tried\nany other versions. The actual value of the limit doesn't appear to matter,\njust if it's present or not.\n\n— Will\n\n\n",
"msg_date": "Thu, 12 Dec 2019 15:32:05 -0800",
"msg_from": "Will Leinweber <will@bitfission.com>",
"msg_from_op": true,
"msg_subject": "Errors \"failed to construct the join relation\" and \"failed to build\n any 2-way joins\""
},
{
"msg_contents": "Will Leinweber <will@bitfission.com> writes:\n> On 12.1, fresh initdb the following query gives me the error\n> \"ERROR: failed to construct the join relation\"\n\nI'm getting an assertion failure in an assert-enabled build, here:\n\n(gdb) f 3\n#3 0x00000000006f382a in create_lateral_join_info (root=0x2d380c8)\n at initsplan.c:637\n637 Assert(!bms_is_member(rti, lateral_relids));\n\nEyeing the plan produced by v11, I'm suspecting some oversight in\nthe RTE_RESULT changes (4be058fe9); but I haven't actually bisected.\nToo tired to look closer right now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 13 Dec 2019 01:26:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Errors \"failed to construct the join relation\" and \"failed to\n build any 2-way joins\""
},
{
"msg_contents": "I wrote:\n> Will Leinweber <will@bitfission.com> writes:\n>> On 12.1, fresh initdb the following query gives me the error\n>> \"ERROR: failed to construct the join relation\"\n\n> Eyeing the plan produced by v11, I'm suspecting some oversight in\n> the RTE_RESULT changes (4be058fe9); but I haven't actually bisected.\n\nYup: it's folding the join tree to the point where a PlaceHolderVar ends\nup marked as to be evaluated by the same relation that uses it, and then\nthings go all pear-shaped. Here's a proposed patch for that.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 13 Dec 2019 18:09:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Errors \"failed to construct the join relation\" and \"failed to\n build any 2-way joins\""
}
]
[
{
"msg_contents": "On Thu, Dec 12, 2019, Justin Pryzby wrote in 20191212180506.GR2082@telsasoft.com:\n> Actually, I tried using pg_ls_tmpdir(), but it unconditionally masks\n> non-regular files and thus shared filesets. Maybe that's worth discussion on a\n> new thread ?\n> \n> src/backend/utils/adt/genfile.c\n> /* Ignore anything but regular files */\n> if (!S_ISREG(attrib.st_mode))\n> continue;\n\nI suggested that pg_ls_tmpdir should show shared filesets like\n> 169347 5492 -rw-r----- 1 postgres postgres 5619712 Dec 7 01:35 /var/lib/pgsql/12/data/base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset/0.0\n\nShould it have an extra column for the parent dir (null for nonshared filesets).\nMaybe it would only show 1) files; and, 2) parents named\npgsql_tmp[0-9.]+.sharedfileset; and maybe, 3) files directly underneath (2).\n\nOr should it require an argument to show them? pg_ls_tmpdir(sharedfileset=False)\n\nThat allows enumerating an entire directory and its subdirs, except hidden\nfiles, and probably except files more than one level deep. I guess pg_ls_dir\nalready allows that.\n\nActually, my suggestion would be to make pg_ls_tmpdir expose \"isdir\", same as\npg_stat_file.\n\nThat's already possible using pg_ls_dir:\n\npostgres=# SELECT name, s.size, s.modification, s.isdir FROM (SELECT 'base/pgsql_tmp'p)p, pg_ls_dir(p)name, pg_stat_file(p||'/'||name)s;\n name | size | modification | isdir \n--------------------------------+------+------------------------+-------\n pgsql_tmp11025.0.sharedfileset | 4096 | 2019-12-07 01:35:56-06 | t\n\npg_tmpdir() might return (name,size,modtime), or perhaps\n(name,isdir,size,modtime), which would be more likely to cause deliberate\nbreakage if someone assumed that record[1] was the size and rest of result was\nsame...\n\nJustin\n\n\n",
"msg_date": "Thu, 12 Dec 2019 23:39:32 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "pg_ls_tmpdir to show shared filesets"
},
{
"msg_contents": "On Thu, Dec 12, 2019 at 11:39:31PM -0600, Justin Pryzby wrote:\n> I suggested that pg_ls_tmpdir should show shared filesets like\n> > 169347 5492 -rw-r----- 1 postgres postgres 5619712 Dec 7 01:35 /var/lib/pgsql/12/data/base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset/0.0\n..\n> Actually, my suggestion would be to make pg_ls_tmpdir expose \"isdir\", same as\n> pg_stat_file.\n\nDone like that",
"msg_date": "Sat, 14 Dec 2019 16:47:35 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show shared filesets"
}
]
[
{
"msg_contents": "Hello!\n\nAccording to discussion at pgsql-general (\nhttps://www.postgresql.org/message-id/flat/CAFp7QwqFYcHiARfT91rOQj%3DmFT0MWBE%2BkxEmjfQh3QmRN1UBiw%40mail.gmail.com#05b75be4fd11c0e6216f0b329c808f72)\nI\nhave prepared patch to improve documentation for REINDEX. It should be more\ninline with another documentation pages.\n\nYou can see the change applied in attached file. Patch can be found at\nhttps://github.com/simi/postgres/pull/3 (diff -\nhttps://github.com/simi/postgres/pull/3.diff, patch -\nhttps://github.com/simi/postgres/pull/3.patch).\n\nThis change is based on idea of Pavel Stěhule, thanks a lot for that!\nSimilar approach was used recently in\nhttps://www.postgresql.org/docs/devel/sql-dropdatabase.html.",
"msg_date": "Fri, 13 Dec 2019 10:28:33 +0100",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Improve documentation of REINDEX options"
},
{
"msg_contents": "On Fri, Dec 13, 2019 at 10:28:33AM +0100, Josef Šimánek wrote:\n> I have prepared patch to improve documentation for REINDEX. It\n> should be more inline with another documentation pages.\n> \n> You can see the change applied in attached file. Patch can be found at\n> https://github.com/simi/postgres/pull/3 (diff -\n> https://github.com/simi/postgres/pull/3.diff, patch -\n> https://github.com/simi/postgres/pull/3.patch).\n\nPlease, always attach your patches to emails sent on this mailing\nlist. If for a reason or another, the data located to with external\nlink is lost (imagine for example that your github account is gone or\nthat github is reduced to ashes), then such patches would be lost, and\nanybody looking at this email 10 years from now would not know what\nyou have been writing about here. I am attaching it here for the\narchive's sake.\n\n+<phrase>where <replaceable class=\"parameter\">option</replaceable> can\nbe:</phrase>\n+\n+ VERBOSE\nWhy not... We did that in the docs of ANALYZE for v11 when\nintroducing the parenthesized grammar flavor for the options\navailable.\n\n- Rebuild all the indexes on the table <literal>my_table</literal>:\n+ Rebuild all the indexes on the table <literal>my_table</literal>\n with progress report per index:\n\n <programlisting>\n-REINDEX TABLE my_table;\n+REINDEX (VERBOSE) TABLE my_table;\nNot sure if this part brings much to the reader though. It is not\nlike the command description of REINDEX is complicated with dozens\nof option choices.\n--\nMichael",
"msg_date": "Tue, 17 Dec 2019 14:36:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve documentation of REINDEX options"
},
{
"msg_contents": "út 17. 12. 2019 v 6:36 odesílatel Michael Paquier <michael@paquier.xyz>\nnapsal:\n\n> On Fri, Dec 13, 2019 at 10:28:33AM +0100, Josef Šimánek wrote:\n> > I have prepared patch to improve documentation for REINDEX. It\n> > should be more inline with another documentation pages.\n> >\n> > You can see the change applied in attached file. Patch can be found at\n> > https://github.com/simi/postgres/pull/3 (diff -\n> > https://github.com/simi/postgres/pull/3.diff, patch -\n> > https://github.com/simi/postgres/pull/3.patch).\n>\n> Please, always attach your patches to emails sent on this mailing\n> list. If for a reason or another, the data located to with external\n> link is lost (imagine for example that your github account is gone or\n> that github is reduced to ashes), then such patches would be lost, and\n> anybody looking at this email 10 years from now would not know what\n> you have been writing about here. I am attaching it here for the\n> archive's sake.\n>\n\nSorry, I'm attaching the same patch now for future reference.\n\n\n>\n> +<phrase>where <replaceable class=\"parameter\">option</replaceable> can\n> be:</phrase>\n> +\n> + VERBOSE\n> Why not... We did that in the docs of ANALYZE for v11 when\n> introducing the parenthesized grammar flavor for the options\n> available.\n>\n> - Rebuild all the indexes on the table <literal>my_table</literal>:\n> + Rebuild all the indexes on the table <literal>my_table</literal>\n> with progress report per index:\n>\n> <programlisting>\n> -REINDEX TABLE my_table;\n> +REINDEX (VERBOSE) TABLE my_table;\n> Not sure if this part brings much to the reader though. It is not\n> like the command description of REINDEX is complicated with dozens\n> of option choices.\n> --\n> Michael\n>",
"msg_date": "Tue, 17 Dec 2019 10:09:58 +0100",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Improve documentation of REINDEX options"
},
{
"msg_contents": "út 17. 12. 2019 v 6:36 odesílatel Michael Paquier <michael@paquier.xyz>\nnapsal:\n\n> On Fri, Dec 13, 2019 at 10:28:33AM +0100, Josef Šimánek wrote:\n> > I have prepared patch to improve documentation for REINDEX. It\n> > should be more inline with another documentation pages.\n> >\n> > You can see the change applied in attached file. Patch can be found at\n> > https://github.com/simi/postgres/pull/3 (diff -\n> > https://github.com/simi/postgres/pull/3.diff, patch -\n> > https://github.com/simi/postgres/pull/3.patch).\n>\n> Please, always attach your patches to emails sent on this mailing\n> list. If for a reason or another, the data located to with external\n> link is lost (imagine for example that your github account is gone or\n> that github is reduced to ashes), then such patches would be lost, and\n> anybody looking at this email 10 years from now would not know what\n> you have been writing about here. I am attaching it here for the\n> archive's sake.\n>\n> +<phrase>where <replaceable class=\"parameter\">option</replaceable> can\n> be:</phrase>\n> +\n> + VERBOSE\n> Why not... We did that in the docs of ANALYZE for v11 when\n> introducing the parenthesized grammar flavor for the options\n> available.\n>\n> - Rebuild all the indexes on the table <literal>my_table</literal>:\n> + Rebuild all the indexes on the table <literal>my_table</literal>\n> with progress report per index:\n>\n> <programlisting>\n> -REINDEX TABLE my_table;\n> +REINDEX (VERBOSE) TABLE my_table;\n> Not sure if this part brings much to the reader though. It is not\n> like the command description of REINDEX is complicated with dozens\n> of option choices.\n>\n\nFor me this is the default way how to reindex whole table manually in psql\nsince you get some \"progress\". Anyway I can remove it if you don't see any\nbenefit in extending this example.\n\n\n> --\n> Michael\n>\n",
"msg_date": "Tue, 17 Dec 2019 10:23:51 +0100",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Improve documentation of REINDEX options"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 10:23:51AM +0100, Josef Šimánek wrote:\n> For me this is the default way how to reindex whole table manually in psql\n> since you get some \"progress\". Anyway I can remove it if you don't see any\n> benefit in extending this example.\n\nI have thought more about this one through the night, and I am still\nnot sure that this brings much. So, committed a simpler version\nhandling only the option part.\n--\nMichael",
"msg_date": "Wed, 18 Dec 2019 11:11:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve documentation of REINDEX options"
}
]
[
{
"msg_contents": "I noticed this in the regression test while polishing the PWJ-enhancement patch:\n\n-- partitionwise join can not be applied for a join between list and range\n-- partitioned tables\nEXPLAIN (COSTS OFF)\nSELECT t1.a, t1.c, t2.b, t2.c FROM prt1_n t1 FULL JOIN prt1 t2 ON (t1.c = t2.c);\n\nThe test doesn't match the comment which precedes it, because both\ntables are range-partitioned as shown below.\n\n\\d+ prt1_n\n Partitioned table \"public.prt1_n\"\n Column | Type | Collation | Nullable | Default | Storage\n | Stats target | Description\n--------+-------------------+-----------+----------+---------+----------+--------------+-------------\n a | integer | | | | plain\n | |\n b | integer | | | | plain\n | |\n c | character varying | | | |\nextended | |\nPartition key: RANGE (c)\nPartitions: prt1_n_p1 FOR VALUES FROM ('0000') TO ('0250'),\n prt1_n_p2 FOR VALUES FROM ('0250') TO ('0500')\n\n\\d+ prt1\n Partitioned table \"public.prt1\"\n Column | Type | Collation | Nullable | Default | Storage\n | Stats target | Description\n--------+-------------------+-----------+----------+---------+----------+--------------+-------------\n a | integer | | | | plain\n | |\n b | integer | | | | plain\n | |\n c | character varying | | | |\nextended | |\nPartition key: RANGE (a)\nPartitions: prt1_p1 FOR VALUES FROM (0) TO (250),\n prt1_p2 FOR VALUES FROM (250) TO (500),\n prt1_p3 DEFAULT\n\nI think the test should be moved to a more appropriate place, and the\ncomment should be moved to a test that really performs a join between\nlist and range partitioned tables. Attached is a patch for that. The\npatch fixes another misplaced comment as well.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Fri, 13 Dec 2019 19:17:55 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Unmatched test and comment in partition_join.sql regression test"
},
{
"msg_contents": "On Fri, Dec 13, 2019 at 7:17 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> I noticed this in the regression test while polishing the PWJ-enhancement patch:\n>\n> -- partitionwise join can not be applied for a join between list and range\n> -- partitioned tables\n> EXPLAIN (COSTS OFF)\n> SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_n t1 FULL JOIN prt1 t2 ON (t1.c = t2.c);\n>\n> The test doesn't match the comment which precedes it, because both\n> tables are range-partitioned as shown below.\n\n> I think the test should be moved to a more appropriate place,\n\nOn second thought I changed my mind; we would not need to move that\ntest, so I refrained from doing so.\n\n> Attached is a patch for that. The\n> patch fixes another misplaced comment as well.\n\nI pushed an updated version of the patch.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Mon, 16 Dec 2019 17:19:36 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Unmatched test and comment in partition_join.sql regression test"
}
]
[
{
"msg_contents": "Keeping pg_config.h.win32 up to date with pg_config.h.in is a gratuitous \nannoyance. This setup dates back to the minimal client-only Windows \nbuilds using win32.mak files, which has been removed in PG10. The MSVC \nbuild system has the power of Perl available, so we can do better.\n\nMy proposal is that we essentially emulate what config.status does in \nPerl code. config.status gets a list of defines discovered by configure \nand processes pg_config.h.in to pg_config.h by substituting the defines. \n The MSVC build system basically has those defines hardcoded, but the \nprocessing we can do in just the same way. It already had code to do a \nbit of that anyway, so it's really not a big leap. See attached \npatches. (I put the remove of pg_config.h.win32 into a separate patch \nso that reviewers can just apply the first patch and then diff the \nproduced pg_config.h with the existing pg_config.h.win32.)\n\nThe only thing that's not quite explainable is that the existing code \nwrapped some parts of the pg_config.h it generated into an #ifndef \nIGNORE_CONFIGURED_SETTINGS block. I don't see that referenced or used \nanywhere else. The original commit (fb8155d0d) claimed this was \"to be \nused by the installer\", but I didn't find any reference in the current \ninstaller's Git repository either. I suspect this is obsolete.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 13 Dec 2019 13:51:55 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "automating pg_config.h.win32 maintenance"
},
{
"msg_contents": "On 13/12/2019 14:51, Peter Eisentraut wrote:\n> Keeping pg_config.h.win32 up to date with pg_config.h.in is a gratuitous\n> annoyance.\n\nHear hear!\n\n> My proposal is that we essentially emulate what config.status does in\n> Perl code. config.status gets a list of defines discovered by configure\n> and processes pg_config.h.in to pg_config.h by substituting the defines.\n> The MSVC build system basically has those defines hardcoded, but the\n> processing we can do in just the same way. It already had code to do a\n> bit of that anyway, so it's really not a big leap. See attached\n> patches. (I put the remove of pg_config.h.win32 into a separate patch\n> so that reviewers can just apply the first patch and then diff the\n> produced pg_config.h with the existing pg_config.h.win32.)\n\nSounds good. I hadn't realized we already had the infrastructure ready \nfor this.\n\nA couple of minor comments:\n\n > +\t\tprint $o \"/* src/include/pg_config.h. Generated from \npg_config.h.in by \", basename(__FILE__), \". */\\n\";\n\nHow about just hardcoding this to \"Generated from pg_config.h.in by \nSolution.pm\". Using basename(__FILE__) seems overly cute.\n\n> +\t\tmy @simple_defines = qw(\n> +\t\t\tHAVE_ATOMICS\n> +\t\t\t... long list ...\n> +\t\t\tUSE_WIN32_SHARED_MEMORY\n> +\t\t );\n> +\n> +\t\tforeach my $k (@simple_defines)\n> +\t\t{\n> +\t\t\t$define{$k} = 1;\n> +\t\t}\n\nI don't think this @simple_defines is really any better than listing all \nthe options directly with \"$define{HAVE_ATOMICS} = 1\". And some simple \ndefines are already listed like that, e.g. HAVE_DECL_STRNLEN above that \nlist.\n\n- Heikki\n\n\n",
"msg_date": "Fri, 13 Dec 2019 15:14:08 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: automating pg_config.h.win32 maintenance"
},
{
"msg_contents": "On Fri, Dec 13, 2019 at 03:14:08PM +0200, Heikki Linnakangas wrote:\n> On 13/12/2019 14:51, Peter Eisentraut wrote:\n>> Keeping pg_config.h.win32 up to date with pg_config.h.in is a gratuitous\n>> annoyance.\n> \n> Hear hear!\n\nYoupi.\n\n> I don't think this @simple_defines is really any better than listing all the\n> options directly with \"$define{HAVE_ATOMICS} = 1\". And some simple defines\n> are already listed like that, e.g. HAVE_DECL_STRNLEN above that list.\n\nAgreed.\n\nIt would be nice to put a comment close to FLEXIBLE_ARRAY_MEMBER,\nwhere you use \"/* */\" as a way to emulate an empty value which is\nstill defined. Or would it be cleaner to just use an empty string?\n--\nMichael",
"msg_date": "Fri, 13 Dec 2019 22:44:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: automating pg_config.h.win32 maintenance"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Dec 13, 2019 at 03:14:08PM +0200, Heikki Linnakangas wrote:\n>> On 13/12/2019 14:51, Peter Eisentraut wrote:\n>>> Keeping pg_config.h.win32 up to date with pg_config.h.in is a gratuitous\n>>> annoyance.\n\n>> Hear hear!\n\n> Youpi.\n\n+1\n\n>> I don't think this @simple_defines is really any better than listing all the\n>> options directly with \"$define{HAVE_ATOMICS} = 1\". And some simple defines\n>> are already listed like that, e.g. HAVE_DECL_STRNLEN above that list.\n\n> Agreed.\n\nYeah, having one style for setting a variable is better than having two.\n\nOne thing that disturbs me slightly is that the plan seems to be to\nnot mention variables in this list at all if they're to be undefined\non Windows. I realize that we've frequently done that by omission in\npg_config.h.win32, but I don't think it's good practice: it encourages\nfailure to think about how such variables need to be set on Windows.\n\nWould it be reasonable to require every symbol found in pg_config.h.in\nto be explicitly mentioned here? We could put the ones that are to\nend up undefined in a separate %undefine hash, or we could have a\nconvention that an empty value in %define means to #undef it (though\nI suppose that might be awkward in a few cases).\n\nEither way, though, we'd end up with a situation where adding a new\nconfigure symbol always requires touching Solution.pm, where before\nit required touching pg_config.h.win32 (at least if you were being\nstrict about it). So in some sense this is no improvement. But we\ndo have the ability with this to do some computation to select the\nvariable value, so that's good.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 13 Dec 2019 08:56:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: automating pg_config.h.win32 maintenance"
},
{
"msg_contents": "On 2019-12-13 14:44, Michael Paquier wrote:\n> It would be nice to put a comment close to FLEXIBLE_ARRAY_MEMBER,\n> where you use \"/* */\" as a way to emulate an empty value which is\n> still defined. Or would it be cleaner to just use an empty string?\n\nThat's just the way Autoconf does it. I haven't pondered why it's done \nthat way, only focusing on making the resulting files match.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 13 Dec 2019 15:05:46 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: automating pg_config.h.win32 maintenance"
},
{
"msg_contents": "On 2019-12-13 14:56, Tom Lane wrote:\n> One thing that disturbs me slightly is that the plan seems to be to\n> not mention variables in this list at all if they're to be undefined\n> on Windows. I realize that we've frequently done that by omission in\n> pg_config.h.win32, but I don't think it's good practice: it encourages\n> failure to think about how such variables need to be set on Windows.\n\nOK, here is an updated patch set that has all defines in one big Perl \nhash, and also requires that all symbols in pg_config.h.in are accounted \nfor. (The indentation is from pgperltidy.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 16 Dec 2019 13:12:27 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: automating pg_config.h.win32 maintenance"
},
{
"msg_contents": "On Mon, Dec 16, 2019 at 01:12:27PM +0100, Peter Eisentraut wrote:\n> OK, here is an updated patch set that has all defines in one big Perl hash,\n> and also requires that all symbols in pg_config.h.in are accounted for.\n> (The indentation is from pgperltidy.)\n\nThe patch looks pretty clean. I have a few minor comments.\n\n- if (/^AC_INIT\\(\\[PostgreSQL\\], \\[([^\\]]+)\\]/)\n+ if (/^AC_INIT\\(\\[([^\\]]+)\\], \\[([^\\]]+)\\], \\[([^\\]]+)\\]/)\n {\nWhy did you remove the bit about \"PostgreSQL\"?\n\n+ ENABLE_GSS => $self->{options}->{gss} ? 1 :\nundef,\n[...]\n- if ($self->{options}->{gss})\n- {\n- print $o \"#define ENABLE_GSS 1\\n\";\nI found the part about gss and nls better with the old style. A\nmatter of taste, not really an objection. And your style is actually\nconsistent with USE_ASSERT_CHECKING as well.\n\n+ else\n+ {\n+ croak \"missing: $macro\";\n+ }\n[...]\n+ if (scalar(keys %define) > 0)\n+ {\n+ croak \"unused defines: @{[%define]}\";\n+ }\nSo you have checks both ways. That's nice. \n\n+ open(my $i, '<', \"src/include/pg_config.h.in\")\n+ || confess \"Could not open pg_config.h.in\\n\";\n+ open(my $o, '>', \"src/include/pg_config.h\")\n+ || confess \"Could not write to pg_config.h\\n\";\nFailure to open pg_config.h.\n\nWouldn't it be better to remove pg_config_ext.h.win32 as well?\n--\nMichael",
"msg_date": "Tue, 17 Dec 2019 15:30:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: automating pg_config.h.win32 maintenance"
},
{
"msg_contents": "On 2019-12-17 07:30, Michael Paquier wrote:\n> The patch looks pretty clean. I have a few minor comments.\n> \n> - if (/^AC_INIT\\(\\[PostgreSQL\\], \\[([^\\]]+)\\]/)\n> + if (/^AC_INIT\\(\\[([^\\]]+)\\], \\[([^\\]]+)\\], \\[([^\\]]+)\\]/)\n> {\n> Why did you remove the bit about \"PostgreSQL\"?\n\nJust to make it more general. If we're going to parse the arguments, \nwhy not parse all of them the same way.\n\n> + open(my $i, '<', \"src/include/pg_config.h.in\")\n> + || confess \"Could not open pg_config.h.in\\n\";\n> + open(my $o, '>', \"src/include/pg_config.h\")\n> + || confess \"Could not write to pg_config.h\\n\";\n> Failure to open pg_config.h.\n> \n> Wouldn't it be better to remove pg_config_ext.h.win32 as well?\n\nYeah, good idea. Attached patch is refactored so all three header files \nmanaged by AC_CONFIG_HEADERS are processed the same way.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 17 Dec 2019 11:56:17 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: automating pg_config.h.win32 maintenance"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 11:56:17AM +0100, Peter Eisentraut wrote:\n> Yeah, good idea. Attached patch is refactored so all three header files\n> managed by AC_CONFIG_HEADERS are processed the same way.\n\nLooks good. I just have one comment.\n\n+ # XXX\n+ open(my $f, '>>', 'src/include/pg_config.h')\n+ || confess \"Could not write to src/include/pg_config.h\\n\";\n+ print $f \"\\n\";\n+ print $f \"#define VAL_CONFIGURE \\\"\"\n+ . $self->GetFakeConfigure() . \"\\\"\\n\";\n+ close($f);\n\nThis part needs a comment. Like it is the equivalent of what\nsrc/common/'s Makefile does or something like that?\n--\nMichael",
"msg_date": "Thu, 19 Dec 2019 12:59:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: automating pg_config.h.win32 maintenance"
},
{
"msg_contents": "On 2019-12-19 04:59, Michael Paquier wrote:\n> On Tue, Dec 17, 2019 at 11:56:17AM +0100, Peter Eisentraut wrote:\n>> Yeah, good idea. Attached patch is refactored so all three header files\n>> managed by AC_CONFIG_HEADERS are processed the same way.\n> \n> Looks good. I just have one comment.\n> \n> + # XXX\n> + open(my $f, '>>', 'src/include/pg_config.h')\n> + || confess \"Could not write to src/include/pg_config.h\\n\";\n> + print $f \"\\n\";\n> + print $f \"#define VAL_CONFIGURE \\\"\"\n> + . $self->GetFakeConfigure() . \"\\\"\\n\";\n> + close($f);\n> \n> This part needs a comment. Like it is the equivalent of what\n> src/common/'s Makefile does or something like that?\n\nThis was meant to be addressed by \n<https://www.postgresql.org/message-id/flat/6e457870-cef5-5f1d-b57c-fc89cfb8a788%402ndquadrant.com>, \nbut that discussion has not concluded yet. Perhaps it makes more sense \nin this context.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 19 Dec 2019 08:31:05 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: automating pg_config.h.win32 maintenance"
},
{
"msg_contents": "On Thu, Dec 19, 2019 at 08:31:05AM +0100, Peter Eisentraut wrote:\n> On 2019-12-19 04:59, Michael Paquier wrote:\n>> This part needs a comment. Like it is the equivalent of what\n>> src/common/'s Makefile does or something like that?\n> \n> This was meant to be addressed by <https://www.postgresql.org/message-id/flat/6e457870-cef5-5f1d-b57c-fc89cfb8a788%402ndquadrant.com>,\n> but that discussion has not concluded yet. Perhaps it makes more sense in\n> this context.\n\nHmm. Your patch does not really change the generation of\nVAL_CONFIGURE in pg_config.h, so I am not sure that this other thread\nis an actual barrier for the improvement discussed here. I would be\nactually fine to just remove the XXX and still use GetFakeConfigure()\nwith VAL_CONFIGURE for now. It would be a good thing to get rid of\npg_config.h.win32 definitely.\n--\nMichael",
"msg_date": "Thu, 19 Dec 2019 16:49:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: automating pg_config.h.win32 maintenance"
},
{
"msg_contents": "On 2019-12-19 08:49, Michael Paquier wrote:\n> On Thu, Dec 19, 2019 at 08:31:05AM +0100, Peter Eisentraut wrote:\n>> On 2019-12-19 04:59, Michael Paquier wrote:\n>>> This part needs a comment. Like it is the equivalent of what\n>>> src/common/'s Makefile does or something like that?\n>>\n>> This was meant to be addressed by <https://www.postgresql.org/message-id/flat/6e457870-cef5-5f1d-b57c-fc89cfb8a788%402ndquadrant.com>,\n>> but that discussion has not concluded yet. Perhaps it makes more sense in\n>> this context.\n> \n> Hmm. Your patch does not really change the generation of\n> VAL_CONFIGURE in pg_config.h, so I am not sure that this other thread\n> is an actual barrier for the improvement discussed here. I would be\n> actually fine to just remove the XXX and still use GetFakeConfigure()\n> with VAL_CONFIGURE for now. It would be a good thing to get rid of\n> pg_config.h.win32 definitely.\n\ncommitted with that comment removed\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 20 Dec 2019 09:17:14 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: automating pg_config.h.win32 maintenance"
},
{
"msg_contents": "On Fri, Dec 20, 2019 at 09:17:14AM +0100, Peter Eisentraut wrote:\n> committed with that comment removed\n\nYeah, thanks!\n--\nMichael",
"msg_date": "Sat, 21 Dec 2019 11:48:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: automating pg_config.h.win32 maintenance"
}
]